Re: Change from single region EC2 to multi-region

2015-08-11 Thread Prem Yadav
1) There are ways to connect two VPCs using a VPN.
2) Regarding connectivity over the public IPs: can you ping one public IP
from another node in a different region?
If ping works, check port connectivity using telnet. You can start a
temporary server on a port using netcat. If connectivity fails, you need to
look into your routing tables to allow connectivity on the public IP
addresses.
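
For example, a minimal check might look like this (a sketch; 7000 is
Cassandra's default inter-node storage port, and <public-ip> is a
placeholder for the target node's public address):

    # On the destination node, start a throwaway listener on the port
    # under test (7000 is Cassandra's default storage port).
    nc -l 7000            # traditional netcat wants: nc -l -p 7000

    # From a node in the other region, test the public IP:
    ping -c 3 <public-ip>
    telnet <public-ip> 7000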

On Tue, Aug 11, 2015 at 7:51 PM, Asher Newcomer asher...@gmail.com wrote:

 [snip: original question quoted in full; see Asher Newcomer's original
 message below]



Re: Change from single region EC2 to multi-region

2015-08-11 Thread Bryan Cheng
Setting broadcast_address to the public IP should be the correct
configuration. Assuming your firewall rules are all kosher, you may need to
clear the gossip state:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_gossip_purge.html
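
If it helps, the linked procedure boils down to roughly the following (a
sketch; the service name and cassandra-env.sh path assume a package
install, adjust for yours):

    # Stop the node, then have it ignore its saved ring (gossip) state
    # on the next startup via a JVM flag.
    sudo service cassandra stop
    echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"' \
        | sudo tee -a /etc/cassandra/cassandra-env.sh
    sudo service cassandra start
    # After the node rejoins with fresh gossip state, remove that line
    # from cassandra-env.sh so it is not applied on every restart.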

-- Forwarded message --
From: Asher Newcomer asher...@gmail.com
Date: Tue, Aug 11, 2015 at 11:51 AM
Subject: Change from single region EC2 to multi-region
To: user@cassandra.apache.org


[snip: original question quoted in full; see Asher Newcomer's original
message below]


Re: Change from single region EC2 to multi-region

2015-08-11 Thread John Wong
Use VPC peering rather than a VPN; it's more reliable.
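
If you go that route, the gist with the AWS CLI is something like this (a
sketch; the vpc-/pcx- IDs are placeholders, and peering also needs routes
on both sides):

    # Request a peering connection between the two VPCs (placeholder IDs).
    aws ec2 create-vpc-peering-connection \
        --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222

    # Accept it from the peer VPC's side (placeholder pcx ID).
    aws ec2 accept-vpc-peering-connection \
        --vpc-peering-connection-id pcx-33333333

    # Then add a route in each VPC's route table pointing the other
    # VPC's CIDR block at the peering connection.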

On Tue, Aug 11, 2015 at 5:14 PM, Prem Yadav ipremya...@gmail.com wrote:

 [snip: Prem Yadav's reply, quoted in full earlier in the thread]





Change from single region EC2 to multi-region

2015-08-11 Thread Asher Newcomer
X-post w/ SO: link
https://stackoverflow.com/questions/31949043/cassandra-change-from-single-region-ec2-to-multi-region

I have (had) a working 4-node Cassandra cluster set up in an EC2 VPC. The
setup was as follows:

172.18.100.110 - seed - DC1 / RAC1

172.18.100.111 - DC1 / RAC1

172.18.100.112 - seed - DC1 / RAC2

172.18.100.113 - DC1 / RAC2

All of the above nodes are in us-east-1d, and the cluster is configured
with the GossipingPropertyFileSnitch (I would rather not use the
EC2-specific snitches).
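
(With that snitch, each node's DC/rack comes from
cassandra-rackdc.properties, e.g. on the first node:)

    # cassandra-rackdc.properties on 172.18.100.110
    dc=DC1
    rack=RAC1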

listen_address and broadcast_address were both set to each node's private IP.

I then wanted to expand the cluster into a new region (us-west). Because
cross-region private IP communication is not supported in EC2, I attempted
to change the settings to have the nodes communicate through their public
IPs.

listen_address remained set to private IP
broadcast_address was changed to the public IP
seeds_list IPs were changed to the appropriate public IPs
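
In cassandra.yaml terms, each node now looks roughly like this (the 54.x
public IPs below are placeholders, not my real addresses):

    # cassandra.yaml on 172.18.100.110 (54.x addresses are placeholders)
    listen_address: 172.18.100.110      # this node's private IP
    broadcast_address: 54.0.0.10        # this node's public IP
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "54.0.0.10,54.0.0.12"   # seeds' public IPs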

I restarted the nodes one by one expecting them to simply 'work', but now
they only see themselves and not the other nodes.

nodetool status consistently returns:

Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                    Load      Tokens  Owns  Host ID                               Rack
DN  172.18.100.112             ?         256     ?     968aaa8a-32b7-4493-9747-3df1c3784164  r1
DN  172.18.100.113             ?         256     ?     8e03643c-9db8-4906-aabc-0a8f4f5c087d  r1
UN  [public IP of local node]  75.91 GB  256     ?     6fdcc85d-6c78-46f2-b41f-abfe1c86ac69  RAC1
DN  172.18.100.110             ?         256     ?     fb7b78a8-d1cc-46fe-ab18-f0d3075cb426  r1

On each individual node, the other nodes seem 'stuck' using the private IP
addresses.

*How do I force the nodes to look for each other at their public addresses?*

I have fully opened the EC2 security group/firewall as a test to rule out
any problems there, and it hasn't helped.

Any ideas most appreciated.


Re: Change from single region EC2 to multi-region

2015-08-11 Thread Asher Newcomer
Thank you all for the help and ideas.

In the end, this was a configuration issue in AWS, and not an issue with
Cassandra.

Regards

On Tue, Aug 11, 2015 at 7:26 PM, Bryan Cheng br...@blockcypher.com wrote:

 [snip: Bryan Cheng's reply and the forwarded original message, quoted in
 full earlier in the thread]