Matt,

Wow, that was fast! Thanks for the response. It definitely helps get my head 
around these concepts.

I'll check the IPs again across instance restarts and reboots - I'm pretty sure 
about this ;-)
I see your point about using Chef and just spinning up new instances. In that 
case DNS shouldn't be such a drama. I'll stick with this for now, since Elastic 
IPs are more work.
I'll give /etc/hosts a go and see what I end up with, just for a demo system.
Looking forward to the all-in-one build. Would be nice to play with it.


Thanks
Des Hartman




Des,

Thanks for your email.  Your understanding is generally correct.  A couple of 
comments.

- On point 3, my experience is that across a reboot the public and private IP 
addresses are preserved, but across a stop and start both the public and 
private IP addresses are replaced.

- Bear in mind that in a cloud deployment there is rarely a good reason to 
reboot or restart a node - you'd actually be better off spinning up a 
replacement and then spinning down (and destroying) the original.

- On point 6, yes, whenever a server is added or removed, DNS must be updated. 
The Chef scripts do this automatically, but with the manual install process it 
must be done by hand.

With regard to caching, we currently set the TTL to 300s (5 minutes), which 
seems to work OK, but your proposal of using Elastic IPs should work too. 
Please let us know how you get on if you do go this route.
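
For illustration, here is what a record with that TTL looks like in standard 
BIND zone-file syntax (the name and address are made-up placeholders; Route 53 
takes the same TTL value through its API):

  ; a 300s TTL means stale cached answers expire within 5 minutes
  sprout.example.tld.  300  IN  A  198.51.100.10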

In response to your questions:

1.  The local and public IPs can be gleaned from the instance, although it's 
not quite as simple as ifconfig and hostname. ifconfig will get you the local 
IP, but hostname doesn't always return a valid FQDN (e.g. "hostname --fqdn" on 
one of my boxes returns "ip-10-151-20-48.ec2.internal", which is resolvable 
within EC2 but not on the public internet), and neither command would get you 
the public IP. Instead, you can issue requests to URLs such as 
http://169.254.169.254/latest/meta-data/public-ipv4 - there's a sketch of this 
after this list. As part of some work to build an all-in-one Clearwater image, 
we have created a clearwater-auto-config package that uses this (for which you 
can see the pull request at 
https://github.com/Metaswitch/clearwater-infrastructure/pull/10).

2.  As discussed above, in our experience caching is not an issue because we 
set the cache period fairly low.

3.  On AWS, we use Route 53 (there's an example after this list). In other 
environments we have rolled our own using BIND, although that's a lot more 
fiddly to set up. Route 53 supports local IP DNS entries - you don't, strictly 
speaking, need them to be private. Yes, it does require a TLD.

4.  Yes, I suspect /etc/hosts could be used for single-instance demo systems, 
although I'm not sure we've tried it - please let me know how you get on if you 
go this route (there's an illustrative snippet after this list).
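
On point 1, here's a minimal sketch of the metadata queries. local-ipv4 and 
public-hostname are standard EC2 metadata paths I've added alongside the 
public-ipv4 one mentioned above; this is just the idea, not the 
clearwater-auto-config implementation:

  # Ask the EC2 metadata service for this instance's addresses.
  local_ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
  public_ip=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
  public_hostname=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)
  echo "local=$local_ip public=$public_ip fqdn=$public_hostname"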
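
On point 3, creating a record in Route 53 looks roughly like this using 
Amazon's command-line tools (the zone ID, name and address are placeholders, 
and this is a generic Route 53 sketch rather than anything Clearwater-specific 
- note that the value can happily be a private address):

  aws route53 change-resource-record-sets \
    --hosted-zone-id ZEXAMPLE123 \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "sprout.example.tld",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{"Value": "10.151.20.48"}]
        }
      }]
    }'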
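
On point 4, the /etc/hosts approach would just mean pointing all of the node 
hostnames at the box itself, something like the following (the address is a 
placeholder and I've guessed at the set of node names - adjust to whatever your 
config actually references):

  # /etc/hosts on a single all-in-one demo box
  127.0.0.1  localhost
  10.0.0.5   sprout.example.tld homer.example.tld homestead.example.tld ellis.example.tld bono.example.tld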

I hope that helps.

Cheers,

Matt

From: [email protected] On Behalf Of Des Hartman
Sent: 11 July 2013 12:34
To: [email protected]
Subject: [Clearwater] EC2 public/private IPs and DNS usage

Hi,

I've been trying to wrap my head around how DNS is used in Clearwater with 
multiple instances. This is purely for the purpose of understanding the 
concept. I have not looked at the Chef installation and operation procedures, 
but I assume the concept will be the same?

Here is what I understand so far - please correct me if I am missing something:

  1.  Each server will have its own clearwater/config configuration 
referencing the server function as an FQDN, e.g. sprout.tld, homer.tld, etc. 
This will be the exact same config across all servers and instances of servers 
except for the TLD and the private and public IPs.
  2.  The reason DNS is used is because multiple answers will be returned for 
each server type, and "/etc/hosts" cannot be used to load balance responses 
(see the example lookup after this list).
  3.  Each server instance will have its own public and private EC2 IP when 
created.
  4.  The private IP address changes each time the server is restarted, but 
the public IP remains the same across restarts.
  5.  Each server instance's public or private IP has to be in a central DNS 
server.
  6.  Each time a new server is added or removed, the DNS entries have to be 
updated. Caching will be an issue here.
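
For example, my understanding is that a lookup against a round-robin name 
returns multiple answers, something like this (made-up name and addresses):

  $ dig +short sprout.example.tld
  10.0.0.11
  10.0.0.12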
From this it seems that there is a lot of adding and removing of servers from 
the central DNS, and that caching would become an issue? As an alternative I 
have seen some ideas around assigning Elastic IPs to instances and keeping the 
Elastic IPs in the DNS entries. This way there is no caching issue when 
assigning the same Elastic IP to a new instance that replaces a failed one.

So some of my questions are:

  1.  Can the Clearwater config file not somehow be simplified, with the 
private and public IPs gleaned from the instance itself? Simple ifconfig and 
hostname commands would return all three values.
  2.  If the private IP address changes each time an instance is restarted, 
does this require reconfiguration of the config file and the DNS entry? Will 
caching be an issue?
  3.  What is best to use for DNS: Route 53 or a separate DNS server? Route 53 
does not seem to support private IP DNS entries (they can be publicly 
accessed) and requires a public TLD.
  4.  Can /etc/hosts be used for simple single instance demo systems instead of 
DNS?
Lots of questions, I know, but it's important to understand ;-)

Thanks
Des Hartman

_______________________________________________
Clearwater mailing list
[email protected]
http://lists.projectclearwater.org/listinfo/clearwater
