For the ones that need access by public IP we have not found a way to automate
it.  Would be curious to know if anyone else has been able to do that.
In the case of access by private IP we just specify the security group as the
source.
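
That said, something along these lines might work for scripting the public-IP
rules (a boto3 sketch, untested; the tag filter, security group IDs and regions
below are placeholders, not our actual setup):

    # Collect every Cassandra node's public IP in both regions and add a /32
    # rule for the storage and Thrift ports to each region's security group.
    # Untested sketch; group IDs, regions and the tag filter are placeholders.
    import boto3
    from botocore.exceptions import ClientError

    REGIONS = {"us-east-1": "sg-11111111", "eu-west-1": "sg-22222222"}
    PORTS = [(7000, 7001), (9160, 9160)]   # storage ports, Thrift port

    # 1. Find the public IPs of all Cassandra nodes (assumes a Name tag).
    public_ips = []
    for region in REGIONS:
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.describe_instances(
            Filters=[{"Name": "tag:Name", "Values": ["cassandra-*"]}])
        for reservation in resp["Reservations"]:
            for instance in reservation["Instances"]:
                ip = instance.get("PublicIpAddress")
                if ip:
                    public_ips.append(ip)

    # 2. Authorize each public IP on every region's Cassandra security group.
    for region, group_id in REGIONS.items():
        ec2 = boto3.client("ec2", region_name=region)
        for from_port, to_port in PORTS:
            for ip in public_ips:
                try:
                    ec2.authorize_security_group_ingress(
                        GroupId=group_id,
                        IpPermissions=[{
                            "IpProtocol": "tcp",
                            "FromPort": from_port,
                            "ToPort": to_port,
                            "IpRanges": [{"CidrIp": ip + "/32"}],
                        }])
                except ClientError:
                    pass  # most likely the rule already exists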

From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Wednesday, June 05, 2013 5:45 PM
To: user@cassandra.apache.org
Subject: Re: Looking for a fully working AWS multi DC configuration.

Do you open all these nodes one by one on every Security Group in each region
every time you add a node, or did you manage to automate it somehow?

2013/6/5 Dan Kogan <d...@iqtell.com>
Hi,

We are using a very similar configuration.  In our experience, Cassandra
nodes in the same DC need access over both the public and private IP on the storage
port (7000/7001).  Nodes from the other DC will need access over the public IP on the
storage port.
All Cassandra nodes also need access over the public IP on the Thrift port
(9160).
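
Concretely, the two kinds of rules look roughly like this with boto3 (the
security group ID and remote public IP below are placeholders; only the port
numbers come from the setup above):

    # Rough boto3 illustration of the two rule types described above;
    # the security group ID and remote public IP are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")
    CLUSTER_SG = "sg-11111111"               # this DC's Cassandra security group
    REMOTE_NODE_PUBLIC_IP = "203.0.113.10"   # a node in the other DC

    # Same-DC traffic over private IPs: allow the storage port from the
    # security group itself, so local nodes need no per-node rule for it.
    ec2.authorize_security_group_ingress(
        GroupId=CLUSTER_SG,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 7000, "ToPort": 7001,
            "UserIdGroupPairs": [{"GroupId": CLUSTER_SG}],
        }])

    # Cross-DC traffic arrives from the other region's public IPs, so each
    # remote node gets a /32 rule for the storage port, plus the Thrift port
    # for every node's public IP.
    ec2.authorize_security_group_ingress(
        GroupId=CLUSTER_SG,
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 7000, "ToPort": 7001,
             "IpRanges": [{"CidrIp": REMOTE_NODE_PUBLIC_IP + "/32"}]},
            {"IpProtocol": "tcp", "FromPort": 9160, "ToPort": 9160,
             "IpRanges": [{"CidrIp": REMOTE_NODE_PUBLIC_IP + "/32"}]},
        ])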

Dan

From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Wednesday, June 05, 2013 9:49 AM
To: user@cassandra.apache.org
Subject: Looking for a fully working AWS multi DC configuration.

Hi,

We used to work on a single DC (EC2Snitch / SimpleStrategy). For latency reasons
we had to open a new DC in the US (us-east). We run C* 1.2.2. We don't use VPC.

Now we use:
- 2 DC (eu-west, us-east)
- EC2MultiRegionSnitch / NTS
- public IPs as broadcast_address and seeds
- private IPs as listen_address
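
In cassandra.yaml terms that boils down to something like this on each node
(the addresses below are placeholders, every node has its own pair):

    # cassandra.yaml sketch for one node (placeholder addresses)
    endpoint_snitch: Ec2MultiRegionSnitch
    listen_address: 10.0.1.5            # this node's private IP
    broadcast_address: 203.0.113.10     # this node's public IP
    storage_port: 7000
    ssl_storage_port: 7001
    rpc_port: 9160
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "203.0.113.10,198.51.100.20"   # public IPs, at least one seed per DC

with the keyspaces using NetworkTopologyStrategy and the EC2 regions as data
center names, e.g. replication = {'class': 'NetworkTopologyStrategy',
'eu-west': 2, 'us-east': 2} (the replication factors here are just examples).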

Yet we are experiencing some trouble (node can't reach itself, "Could not
start register mbean in JMX"...), mainly because of the use of public IPs and
AWS inter-region communication.

If someone has successfully set up this kind of cluster, I would like to know
if our configuration is correct and if I am missing something.

I would also like to know what ports I have to open and where I have to
open them from.

Any insight would be greatly appreciated.
