Re: Differing snitches in different datacenters

2019-07-29 Thread Paul Chandler
Hi Voytek, I looked into this a little while ago and couldn’t really find a definitive answer. We ended up keeping the GossipingPropertyFileSnitch in our GCP datacenter; the only downside I could see is that you have to manually specify the rack and DC. But doing it that way does allow
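
For reference, GossipingPropertyFileSnitch reads the DC and rack from conf/cassandra-rackdc.properties on each node. A minimal sketch (the DC and rack names below are placeholders):

  # conf/cassandra-rackdc.properties (set per node)
  dc=gcp-dc1
  rack=rack1
  # prefer_local=true   # optional: prefer the private/internal IP for traffic within the same DC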

CDC enabled settings and performance impact

2019-07-29 Thread Krish Donald
Hi, We need to enable CDC in one of the clusters, which is on DSE 5.1. We need to change the settings below: cdc_enabled cdc_raw_directory cdc_total_space_in_mb cdc_free_space_check_interval_ms What values do you keep for the following? cdc_total_space_in_mb cdc_free_space_check_interval_ms Is there
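
For context, these are cassandra.yaml settings. A hedged sketch with illustrative values only (the path, space cap, and interval below are examples, not recommendations):

  # cassandra.yaml (illustrative values)
  cdc_enabled: true
  cdc_raw_directory: /var/lib/cassandra/cdc_raw    # placeholder path on a volume with headroom
  cdc_total_space_in_mb: 4096                      # example cap; writes to CDC-enabled tables error once this fills
  cdc_free_space_check_interval_ms: 250            # example interval for re-checking free space when at the cap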

RE: Jmx metrics shows node down

2019-07-29 Thread ZAIDI, ASAD A
Another way to purge gossip info from each node is to: 1. Gracefully stop Cassandra, i.e. nodetool drain; kill the Cassandra PID 2. Move/delete files from $DATADIR/system/peers/ 3. Add JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false" in cassandra-env.sh (or the bare -Dcassandra.load_ring_state=false line in jvm.options) 4. Restart the Cassandra service. 5.
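
A rough shell sketch of that sequence, per node (service name and paths are assumptions for a packaged install; adjust to your layout):

  nodetool drain                                   # flush memtables, stop accepting writes on this node
  sudo systemctl stop cassandra                    # or kill the Cassandra PID
  sudo mkdir -p /tmp/peers-backup && sudo mv /var/lib/cassandra/data/system/peers*/* /tmp/peers-backup/   # assumed $DATADIR; move rather than delete
  # add JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false" in cassandra-env.sh,
  # or the bare line -Dcassandra.load_ring_state=false in jvm.options
  sudo systemctl start cassandra
  # remove the load_ring_state flag again once the node has rejoined with clean ring state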

Re: [EXTERNAL] Apache Cassandra upgrade path

2019-07-29 Thread Jai Bheemsen Rao Dhanwada
Thank you Romain On Sat, Jul 27, 2019 at 1:42 AM Romain Hardouin wrote: > Hi, > > Here are some upgrade options: > - Standard rolling upgrade: node by node > > - Fast rolling upgrade: rack by rack. > If clients use CL=LOCAL_ONE then it's OK as long as one rack is UP. > For higher CL it's
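
For the standard node-by-node option, a typical per-node sequence looks roughly like this (package manager, service name, and the timing of upgradesstables are assumptions; follow the official upgrade guide for the exact versions involved):

  nodetool drain                                    # flush and stop accepting writes on this node
  sudo systemctl stop cassandra
  sudo apt-get install cassandra=<target-version>   # or yum/dnf; placeholder version
  sudo systemctl start cassandra
  nodetool status                                   # wait for the node to show UN before moving on
  nodetool upgradesstables                          # typically run once all nodes are on the new version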

Re: Differing snitches in different datacenters

2019-07-29 Thread Voytek Jarnot
Just a quick bump - hoping someone can shed some light on whether running different snitches in different datacenters is a terrible idea or not. It'd be fairly temporary; once the new DC is stood up and nodes are rebuilt, the old DC will be decommissioned. On Thu, Jul 25, 2019 at 12:36 PM Voytek

Re: Jmx metrics shows node down

2019-07-29 Thread yuping wang
Is there a workaround to shorten the 72 hours to something shorter? (You said "by default" - wondering if one can set a non-default value?) Thanks, Yuping On Jul 29, 2019, at 7:28 AM, Oleksandr Shulgin wrote: > On Mon, Jul 29, 2019 at 1:21 PM Rahul Reddy wrote: > > Decommissioned 2 nodes from

Re: Jmx metrics shows node down

2019-07-29 Thread yuping wang
We have the same issue. We observed the JMX metric only cleared after exactly 72 hours too. On Jul 29, 2019, at 11:23 AM, Rahul Reddy wrote: Also, the system.peers table doesn't have information on the old nodes; only the ghost nodes remain in JMX > On Mon, Jul 29, 2019, 7:39 AM Rahul Reddy

Re: Jmx metrics shows node down

2019-07-29 Thread Rahul Reddy
Also, the system.peers table doesn't have information on the old nodes; only the ghost nodes remain in JMX. On Mon, Jul 29, 2019, 7:39 AM Rahul Reddy wrote: > We have removed nodes from a cluster many times but never seen the jmx metric > down stay for 72 hours. So it has to be completely removed

Re: When Apache Cassandra 4.0 will release?

2019-07-29 Thread Pandey Bhaskar
Thanks Simon. Really good to know about it. I was trying to configure it and it's working. On Fri, Jul 26, 2019 at 9:56 PM Simon Fontana Oscarsson < simon.fontana.oscars...@ericsson.com> wrote: > Hi, > > To my knowledge there is no set date for 4.0, the community is > prioritizing QA over fast

Re: Jmx metrics shows node down

2019-07-29 Thread Rahul Reddy
We have removed nodes from a cluster many times but never seen the jmx metric down stay for 72 hours. So it has to be completely removed from gossip for the metric to show as expected? This would be a problem for using the metric for on-call alerting. On Mon, Jul 29, 2019, 7:28 AM Oleksandr Shulgin <

Re: Jmx metrics shows node down

2019-07-29 Thread Oleksandr Shulgin
On Mon, Jul 29, 2019 at 1:21 PM Rahul Reddy wrote: > > Decommissioned 2 nodes from the cluster; nodetool status doesn't list the > nodes, as expected, but jmx metrics still show those 2 nodes as down. > Nodetool gossip shows the 2 nodes in Left state. Why does my jmx still > show those nodes down

Jmx metrics shows node down

2019-07-29 Thread Rahul Reddy
Hello, Decommissioned 2 nodes from the cluster; nodetool status doesn't list the nodes, as expected, but jmx metrics still show those 2 nodes as down. Nodetool gossip shows the 2 nodes in Left state. Why does my jmx still show those nodes as down even after 24 hours? Cassandra version 3.11.3. Anything
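
A quick way to see what gossip still remembers about the decommissioned nodes (a sketch; the FailureDetector MBean is the usual source of per-endpoint up/down state, assuming your metrics exporter reads it):

  nodetool status                              # decommissioned nodes should no longer be listed here
  nodetool gossipinfo | grep -E '^/|STATUS'    # endpoints plus gossip state; LEFT entries linger here
  # JMX: org.apache.cassandra.net:type=FailureDetector, attribute SimpleStates
  #      maps endpoint -> UP/DOWN and commonly feeds "node down" dashboards and alerts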

Re: RMI TCP Connection threads

2019-07-29 Thread Dinesh Joshi
Try obtaining a thread dump; it will help with debugging. Anything that goes via JMX, such as nodetool, could be responsible for it. Dinesh > On Jul 28, 2019, at 10:57 PM, Vlad > wrote: > > Hi, > > suddenly I noticed that one of three nodes started consuming CPU in RMI
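
A minimal way to capture that thread dump (assuming JDK tools are installed; the pgrep pattern is an assumption about how the daemon shows up in the process list):

  jstack -l "$(pgrep -f CassandraDaemon)" > /tmp/cassandra-threads.txt
  # alternative: kill -3 <cassandra PID>  -- the dump goes to Cassandra's stdout/system log
  top -H -p "$(pgrep -f CassandraDaemon)"   # note the hottest thread IDs, then match them in the dump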

Re: RMI TCP Connection threads

2019-07-29 Thread Jeff Jirsa
Someone running nodetool or exporting metrics via JMX are the two most likely explanations > On Jul 28, 2019, at 10:57 PM, Vlad wrote: > > Hi, > > suddenly I noticed that one of three nodes started consuming CPU in RMI TCP > Connection threads. > > What could it be? > > Thanks.