I just added the mx4j-tools jar (version 3.0.2) to my lib folder and also
enabled remote JMX access without authentication (using a firewall to protect
access).
During startup I can see the following two log statements:
HttpAdaptor version 3.0.2 started on port 8081
mx4j successfuly loaded
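For reference, unauthenticated remote JMX is usually enabled through JVM options in conf/cassandra-env.sh. A minimal sketch follows; the exact variable names and port may differ between Cassandra versions, and as noted above the port must then be protected by a firewall:

```sh
# conf/cassandra-env.sh (sketch): expose JMX remotely without authentication.
# Only safe when the JMX port is firewalled off from untrusted networks.
JMX_PORT="7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
```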
So, it
Hi,
You can set the property gc_warn_threshold_in_ms in cassandra.yaml. For
example, if your application is OK with a 2000 ms pause, you can set the value
to 2000 so that only GC pauses longer than 2000 ms will produce the GC warning
and status log output.
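A sketch of the corresponding cassandra.yaml fragment (the setting name is as above; the 2000 ms value is just the example threshold from this thread):

```yaml
# cassandra.yaml: emit the GC warning (with StatusLogger output)
# only for GC pauses longer than this threshold, in milliseconds.
gc_warn_threshold_in_ms: 2000
```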
Please refer
Hi Carlos,
Please check whether this JIRA fixes your problem:
https://issues.apache.org/jira/browse/CASSANDRA-11467
We had been facing a row count issue with a Thrift CF / compact storage, and
this fixed it.
The above is fixed in the latest 2.1.14. It's a two-line fix, so you can also
prepare a custom jar and
Hi All,
I have a cluster of 7 nodes, completely balanced (each node owns ~500 GB of
data), with one keyspace, one table, and three replicas. Then I failed one
node's disk, replaced it with a new one, and started repairing.
During that process I noticed that two additional nodes have
Hi Everyone.
Kindly reply "yes" or "no": is it possible to set up encryption only between
a particular pair of nodes?
Or is it an all-or-nothing feature, where encryption is present between
EVERY pair of nodes or in NO pair of nodes?
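For context, a sketch of the relevant cassandra.yaml section (keystore paths and passwords below are placeholders); the documented internode_encryption values are all, none, dc, and rack, i.e. scoped by datacenter or rack rather than by individual node pairs:

```yaml
server_encryption_options:
    # Scope of node-to-node encryption: all | none | dc | rack
    internode_encryption: all
    keystore: conf/.keystore        # placeholder path
    keystore_password: cassandra    # placeholder
    truststore: conf/.truststore    # placeholder path
    truststore_password: cassandra  # placeholder
```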
Thanks and Regards,
Ajay
On Mon, Apr 18,
Hi Carlos,
Why run a repair there if the topology did not change? Is it a best
practice, just in case, or is there a specific reason?
C*heers,
---
Alain Rodriguez - al...@thelastpickle.com
France
The Last Pickle - Apache Cassandra Consulting
Hi Anuj,
> You could do the following instead to minimize server downtime:
>
> 1. rsync while the server is running
> 2. rsync again to get any new files
> 3. shut server down
> 4. rsync for the 3rd time
>
5. change the data directory in cassandra.yaml and start back up
>
+1
Here are some more details about that
Hi,
Currently, StatusLogger logs info when there are dropped messages or a GC
pause longer than 200 ms.
In my use case there are about 1000 tables, and StatusLogger is logging
too much information for each table.
I wonder, is there a way to reduce this logging? For example, only print the
thread pool
Hi,
This one is old, do you still need help there? Sorry we missed it.
1. What Cassandra version do you use?
2. What does "nodetool tpstats" show you? Any dropped or pending messages?
3. Is your error a full heap memory issue or a native one?
4. What configurations did you change from
I just run it to be sure. Sometimes mistakes happen, and it's a way to be
safe.
On 25/04/2016 10:19, "Alain RODRIGUEZ" wrote:
> Hi Carlos,
>
> Why run a repair there if the topology did not change? Is it a best
> practice, just in case, or is there a specific reason?
Hi,
We have 2.0.14. We use RF=3 and read/write at QUORUM. Moreover, we don't use
incremental backups. As per the documentation at
https://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_backup_snapshot_restore_t.html
, if I need to restore a snapshot on a SINGLE node in a cluster, I