Hi,

If I understand you correctly, you are trying to get a private IP in
us-east talking to a private IP in us-west. To make your life easier,
configure your nodes to use the server's hostname (the EC2 public DNS
name). If the other node is in a different region, the hostname will
resolve to the public IP, and if it's in the same region, it will
resolve to the private IP (EC2's DNS handles this for you). That way
you can stop worrying about whether you are using the public or private
IP to talk to another node; let the AWS DNS do the work for you.
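
For example, here's a quick python check of that split-horizon
behaviour (the hostname below is just a made-up placeholder; swap in
your instance's actual public DNS name):

  # resolve an instance's EC2 public DNS name and see which address it
  # maps to from wherever you run this
  import socket

  host = "ec2-203-0-113-10.compute-1.amazonaws.com"  # hypothetical placeholder
  print(socket.gethostbyname(host))
  # from a node in the same region  -> the private IP (10.x.x.x)
  # from a node in another region   -> the public IP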

Just make sure you are using 0.8 with SSL (internode encryption) turned
on and that you have the appropriate security group definitions ...

-sasha



On Wed, Apr 27, 2011 at 1:55 PM, pankajsoni0126
<pankajsoni0...@gmail.com> wrote:
> I have been trying to deploy a Cassandra cluster across regions, and for that I
> posted "IP address resolution in MultiDC setup".
>
> But I am facing problems getting nodes in different regions, say us-east and
> us-west, talking to each other over the private IPs of the EC2 nodes.
>
> I am assuming that if Cassandra is built for multi-DC setups, it should be easy
> to deploy with node1-in-DC1's public IP listed as a seed in all nodes in DC2,
> so they can learn the network topology? I have hit a dead end deploying that
> scenario.
>
> Or is there any way to use private IPs for such a scenario in EC2, since
> public IPs are less secure and costly?
