Re: How to prevent the removed DC from coming back automatically?

2014-08-14 Thread Mark Reddy
How can we prevent a disconnected DC from coming back automatically? You could use firewall rules to prevent the disconnected DC from contacting your live DCs when it becomes live again. Mark On Thu, Aug 14, 2014 at 6:48 AM, Lu, Boying boying...@emc.com wrote: Hi, All, We are using
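
For example, on each live node you could drop traffic coming from the removed DC's addresses (a minimal sketch; the 10.0.3.0/24 subnet is a placeholder for the removed DC's addressing, and 7000 is the default internode port, 7001 if internode SSL is enabled):

    # Drop internode/gossip traffic from the removed DC
    iptables -A INPUT -s 10.0.3.0/24 -p tcp --dport 7000 -j DROP
    # Or drop everything from that subnet outright
    iptables -A INPUT -s 10.0.3.0/24 -j DROP

Rules like these can be added and removed at runtime, so neither side needs a reboot.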

Re: How to prevent the removed DC from coming back automatically?

2014-08-14 Thread Artur Kronenberg
Hey, not sure if that's what you're looking for, but you can use auto_bootstrap=false in your yaml file to prevent nodes from bootstrapping themselves on startup. The option has been removed from the default cassandra.yaml and now defaults to true, but you can still add it to your configuration. There's a bit of documentation
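
For reference, the line in question (a sketch; the option no longer ships in the default cassandra.yaml, so you add it yourself):

    # cassandra.yaml -- skip bootstrap/streaming when this node starts
    auto_bootstrap: false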

RE: How to prevent the removed DC from coming back automatically?

2014-08-14 Thread Lu, Boying
Thanks a lot. But we want to block all the communications to/from the disconnected VDC without rebooting it. From: Artur Kronenberg [mailto:artur.kronenb...@openmarket.com] Sent: 14 August 2014 17:00 To: user@cassandra.apache.org Subject: Re: How to prevent the removed DC from coming back automatically?

Re: Secondary indexes not working properly since 2.1.0-rc2 ?

2014-08-14 Thread Fabrice Larcher
Hello, I created https://issues.apache.org/jira/browse/CASSANDRA-7766 about that. Fabrice LARCHER 2014-08-13 14:58 GMT+02:00 DuyHai Doan doanduy...@gmail.com: Hello Fabrice. A quick hint: try to create your secondary index WITHOUT the IF NOT EXISTS clause to see if you still have the bug.
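
In other words, something like this (a sketch; the index, table and column names are placeholders):

    -- Form suspected of triggering the bug:
    -- CREATE INDEX IF NOT EXISTS my_idx ON my_ks.my_table (my_column);
    -- Try the plain form instead:
    CREATE INDEX my_idx ON my_ks.my_table (my_column);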

A question to nodetool removenode command

2014-08-14 Thread Lu, Boying
Hi, All, We have a Cassandra 2.0.7 cluster running in three connected DCs, say DC1, DC2 and DC3. DC3 is powered off, so we run the 'nodetool removenode' command on DC1 to remove all nodes of DC3. Do we need to run the same command on DC2? Thanks Boying

Re: A question to nodetool removenode command

2014-08-14 Thread Mark Reddy
Hi, Gossip will propagate to all nodes in a cluster. So if you have a cluster spanning DC1, DC2 and DC3 and you remove all nodes in DC3 via nodetool removenode from a node in DC1, all nodes in both DC1 and DC2 will be informed of the nodes' removal, so there is no need to run it from a node in DC2.
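
A sketch of the flow (the Host ID is a placeholder taken from nodetool status output):

    # On any live node, e.g. in DC1:
    nodetool status                  # note the Host ID of each dead DC3 node
    nodetool removenode <host-id>    # repeat for each DC3 node
    nodetool removenode status       # check streaming/removal progress

Gossip then propagates the removals to every node in DC1 and DC2.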

RE: A question to nodetool removenode command

2014-08-14 Thread Lu, Boying
Thanks a lot ☺ From: Mark Reddy [mailto:mark.re...@boxever.com] Sent: 14 August 2014 18:02 To: user@cassandra.apache.org Subject: Re: A question to nodetool removenode command Hi, Gossip will propagate to all nodes in a cluster. So if you have a cluster spanning DC1, DC2 and DC3 and you then

Communication between data-centers

2014-08-14 Thread Rene Kochen
Hi all, I have a question about communication between two data centers, both with replication factor three. If I read data using local_quorum from datacenter1, I see that digest requests are sent to datacenter2. This is for read repair, I guess. How can I prevent this from happening? Setting

Re: Communication between data-centers

2014-08-14 Thread DuyHai Doan
The dclocal_read_repair_chance option on the table is your friend: http://www.datastax.com/documentation/cassandra/2.0/cassandra/reference/referenceTableAttributes.html?scroll=reference_ds_zyq_zmz_1k__dclocal_read_repair_chance On Thu, Aug 14, 2014 at 4:53 PM, Rene Kochen rene.koc...@schange.com
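
For example (keyspace and table names are placeholders):

    ALTER TABLE my_ks.my_table
      WITH read_repair_chance = 0.0
      AND dclocal_read_repair_chance = 0.1;

With the global chance at 0 and only the dclocal chance set, read repair stays within the coordinator's data center.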

Re: Communication between data-centers

2014-08-14 Thread Rene Kochen
I am using 1.0.11, so I only have read_repair_chance. However, after testing I see that read_repair_chance does work for local_quorum. Based on this site: http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architectureClientRequestsRead_c.html I got the impression that
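
On 1.0.x the setting is adjusted per column family via cassandra-cli rather than CQL (a sketch from memory of the CLI syntax; keyspace and column family names are placeholders):

    use my_keyspace;
    update column family my_cf with read_repair_chance = 0.0;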

Re: Communication between data-centers

2014-08-14 Thread Robert Coli
On Thu, Aug 14, 2014 at 9:24 AM, Rene Kochen rene.koc...@emea.schange.com wrote: I am using 1.0.11, so I only have read_repair_chance. I'm sure this goes without saying, but you should upgrade to the head of 1.2.x (probably via 1.1.x) ASAP. I would not want to operate 1.0.11 in production in

Re: How to prevent the removed DC from coming back automatically?

2014-08-14 Thread Robert Coli
On Thu, Aug 14, 2014 at 1:59 AM, Artur Kronenberg artur.kronenb...@openmarket.com wrote: not sure if that's what you're looking for but you can use auto_bootstrap=false in your yaml file to prevent nodes from bootstrapping themselves on startup. This option has been removed and the default

question about OpsCenter agent

2014-08-14 Thread Clint Kelly
Hi all, I just installed DataStax Enterprise 4.5. I installed OpsCenter Server on one of my four machines. The port that OpsCenter usually uses (8888) was used by something else, so I modified /usr/share/opscenter/conf/opscenterd.conf to set the port to 8889. When I log into OpsCenter, it says
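
For reference, the change described (a sketch of the ini-style opscenterd.conf; section and key names assumed from OpsCenter's documented config format, not verified against 4.5):

    [webserver]
    port = 8889
    interface = 0.0.0.0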

Could table partitioning be implemented using a custom compaction strategy?

2014-08-14 Thread Kevin Burton
We use log-structured tables to hold logs for analysis. It's basically append-only and immutable, and every record carries the timestamp at which it was inserted. Having this in ONE big monolithic table can be problematic. 1. compactions have to compact old data that might not even be used often. 2.
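
One workaround short of a custom compaction strategy is manual time bucketing, i.e. one table per period, so cold data can be dropped wholesale without compaction ever touching it (a sketch; the schema is illustrative):

    -- One table per month; DROP or TRUNCATE logs_2014_07 when it ages out,
    -- so compaction never has to rewrite old, rarely-read data.
    CREATE TABLE logs_2014_08 (
        source  text,
        ts      timestamp,
        payload blob,
        PRIMARY KEY (source, ts)
    );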

Bootstrap failures: unable to find sufficient sources for streaming range

2014-08-14 Thread Peter Haggerty
When adding nodes via bootstrap to a 27-node 2.0.9 cluster with a cluster-wide phi_convict_threshold of 12, the nodes fail to bootstrap. This worked a half dozen times in the past few weeks as we've scaled this cluster from 21 to 24 and then to 27 nodes. There have been no configuration or
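
For reference, the setting involved (cassandra.yaml; 8 is the shipped default, this cluster runs 12):

    # Failure-detector sensitivity; higher values make a node slower
    # to convict a peer as down
    phi_convict_threshold: 12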