Re: Change from single region EC2 to multi-region
1) There are ways to connect two VPCs using a VPN. 2) Regarding connectivity over the public IPs: can you ping one public IP from a node in the other region? If ping works, check port connectivity using telnet; you can stand up a temporary server on a port using netcat. If connectivity fails, you need to look into your routing tables and security groups to allow traffic between the public IP addresses. On Tue, Aug 11, 2015 at 7:51 PM, Asher Newcomer asher...@gmail.com wrote: [quoted original message snipped]
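The ping/telnet/netcat checks suggested above can be sketched in a few lines of Python; the host and port are placeholders for your node's public IP and Cassandra's inter-node storage port (7000 by default, 7001 with SSL):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """TCP-level reachability check, roughly equivalent to 'telnet host port'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def temp_listener(port: int = 0) -> socket.socket:
    """Stand-in for 'nc -l <port>': a throwaway listener to test against.

    Passing port=0 lets the OS pick a free port; read it back via
    srv.getsockname()[1].
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    return srv
```

From a node in the other region you would then check, e.g., `port_reachable("<public-ip>", 7000)`; a False result points at routing tables or security groups rather than Cassandra configuration.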
Re: Change from single region EC2 to multi-region
Setting broadcast_address to the public IP should be the correct configuration. Assuming your firewall rules are all kosher, you may need to clear the gossip state: http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_gossip_purge.html -- Forwarded message -- From: Asher Newcomer asher...@gmail.com Date: Tue, Aug 11, 2015 at 11:51 AM Subject: Change from single region EC2 to multi-region To: user@cassandra.apache.org [forwarded original message snipped]
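The gossip-purge procedure behind that link boils down to a short per-node checklist; here is a hedged sketch of the usual sequence, with paths and the service name assumed for a default package install (adjust for your environment):

```python
# Checklist of per-node shell commands for clearing cached gossip state.
# Paths and service names are assumptions for a default package install.
def gossip_purge_steps(data_dir: str = "/var/lib/cassandra/data") -> list[str]:
    return [
        "nodetool drain",                         # flush and stop accepting writes
        "sudo service cassandra stop",
        f"sudo rm -rf {data_dir}/system/peers*",  # drop cached peer/gossip state
        # restart once with saved ring state ignored, so gossip is rebuilt
        # from the seed list instead of stale cached addresses:
        "sudo service cassandra start  # with -Dcassandra.load_ring_state=false in JVM opts",
    ]
```

The `-Dcassandra.load_ring_state=false` flag only needs to be set for the first restart; remove it again afterwards.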
Re: Change from single region EC2 to multi-region
Use VPC Peering rather than a VPN; it is more reliable. On Tue, Aug 11, 2015 at 5:14 PM, Prem Yadav ipremya...@gmail.com wrote: [quoted reply and original message snipped]
Hash function
Hi all, Each node in a Cassandra ring has a unique 128-bit identifier (the Host ID). Is it obtained by hashing something, such as the IP address? Thank you so much for your help. Kind Regards.
IP settings across data centers
Hi, We need to deploy Cassandra clusters across 2 data centers with 6 nodes each. Each data center has just 2 nodes with public IPs to communicate with the other data center. Can I set up the 6 nodes in each data center with private IPs for internal communication, and the 2 nodes with public IPs for cross-data-center communication? If so, how do I configure Cassandra? If not, where can I modify the Cassandra sources to enable it? Shuo Chen
Re: Hash function
It's not a hash of the IP; there's some entropy in there for uniqueness. On Aug 11, 2015 5:05 AM, Thouraya TH thouray...@gmail.com wrote: [quoted question snipped]
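Concretely, the Host ID is a randomly generated (version 4) UUID: 128 bits drawn almost entirely from a random source, not derived from the node's address. A quick illustration:

```python
import uuid

# A version-4 UUID: 122 random bits plus 6 fixed version/variant bits,
# 128 bits (16 bytes) in total -- no hashing of the IP involved.
host_id = uuid.uuid4()

assert len(host_id.bytes) == 16   # 128 bits
assert host_id.version == 4       # random UUID, not name-based (hash) UUID
```

Two nodes generating their IDs independently will not collide in practice, which is why the ID survives an IP change: the node keeps the same Host ID even if its address moves.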
Should maintenance repairs be run on system related keyspaces?
Hi, I have a general question with regards to repairs on system-related keyspaces. Should a maintenance repair kicked off via cron also repair the system-related keyspaces? Regards, Ken
Re: Should maintenance repairs be run on system related keyspaces?
Hi Ken, As the system keyspace is local to each node and repair is meant to fix entropy between replicas, I would say no. You now also know how to find the answer for any other keyspace you count as system-related: if it is local, there is no need; otherwise it depends on whether some entropy is acceptable, but I would say yes, do it, since system keyspaces should be small enough. DESC KEYSPACE system; CREATE KEYSPACE system WITH replication = { 'class': 'LocalStrategy' }; I never read or even thought about this myself; I just wrote what makes sense to me. If I am wrong, others will let you know. C*heers, Alain 2015-08-11 16:01 GMT+02:00 K F kf200...@yahoo.com: [quoted question snipped]
Re: Should maintenance repairs be run on system related keyspaces?
Hi Ken, the system_auth keyspace should be repaired. However, the system keyspace uses a local replication strategy and there is no point in repairing it. Thanks, Prem On Tue, Aug 11, 2015 at 3:01 PM, K F kf200...@yahoo.com wrote: [quoted question snipped]
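The rule of thumb from both replies can be captured in a few lines: repair anything replicated, skip anything node-local. The (name, strategy) pairs below mimic what you would read from the schema tables; the keyspace names are only illustrative:

```python
# Decide which keyspaces a cron-driven repair should cover.
# LocalStrategy keyspaces (e.g. 'system') are node-local, so repairing
# them is a no-op; replicated ones (e.g. 'system_auth') should be repaired.
def keyspaces_to_repair(keyspaces):
    return [name for name, strategy in keyspaces
            if not strategy.endswith("LocalStrategy")]

schema = [
    ("system",      "org.apache.cassandra.locator.LocalStrategy"),
    ("system_auth", "org.apache.cassandra.locator.SimpleStrategy"),
    ("my_app",      "org.apache.cassandra.locator.NetworkTopologyStrategy"),
]
```

With the sample schema above, only system_auth and my_app would be handed to `nodetool repair`.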
Change from single region EC2 to multi-region
X-post w/ SO: https://stackoverflow.com/questions/31949043/cassandra-change-from-single-region-ec2-to-multi-region
I have (had) a working 4-node Cassandra cluster set up in an EC2 VPC. Setup was as follows:
172.18.100.110 - seed - DC1 / RAC1
172.18.100.111 - DC1 / RAC1
172.18.100.112 - seed - DC1 / RAC2
172.18.100.113 - DC1 / RAC2
All of the above nodes are in East-1D, and I have configured the cluster using the GossipingPropertyFileSnitch (I would rather not use the EC2-specific snitches). listen_address and broadcast_address were both set to the node's private IP. I then wanted to expand the cluster into a new region (us-west). Because cross-region private IP communication is not supported in EC2, I attempted to change the settings to have the nodes communicate through their public IPs:
- listen_address remained set to the private IP
- broadcast_address was changed to the public IP
- seeds list IPs were changed to the appropriate public IPs
I restarted the nodes one by one expecting them to simply 'work', but now they only see themselves and not the other nodes. nodetool status consistently returns:
Datacenter: DC1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
DN 172.18.100.112 ? 256 ? 968aaa8a-32b7-4493-9747-3df1c3784164 r1
DN 172.18.100.113 ? 256 ? 8e03643c-9db8-4906-aabc-0a8f4f5c087d r1
UN [public IP of local node] 75.91 GB 256 ? 6fdcc85d-6c78-46f2-b41f-abfe1c86ac69 RAC1
DN 172.18.100.110 ? 256 ? fb7b78a8-d1cc-46fe-ab18-f0d3075cb426 r1
On each individual node, the other nodes seem 'stuck' using the private IP addresses. *How do I force the nodes to look for each other at their public addresses?* I have fully opened the EC2 security group/firewall as a test to rule out any problems there, and it hasn't helped. Any ideas most appreciated.
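For reference, a sketch of what the relevant cassandra.yaml settings look like for the public-IP approach described above. The addresses here are placeholders, and the commented-out option is only available in newer Cassandra versions:

```yaml
# cassandra.yaml fragment (per node) -- addresses are placeholders
listen_address: 172.18.100.110        # this node's private IP
broadcast_address: 54.0.0.10          # this node's public (elastic) IP
# listen_on_broadcast_address: true   # newer versions: accept traffic on both
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "54.0.0.10,54.0.0.12"  # public IPs of the seed nodes
```

All nodes must then be able to reach each other on the storage port at the broadcast (public) addresses, which is an AWS routing/security-group concern rather than a Cassandra one.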
Re: Change from single region EC2 to multi-region
Thank you all for the help and ideas. In the end, this was a configuration issue in AWS, and not an issue with Cassandra. Regards On Tue, Aug 11, 2015 at 7:26 PM, Bryan Cheng br...@blockcypher.com wrote: [quoted reply and forwarded original message snipped]