Re: AWS S3 DNS load balancer

2021-06-15 Thread Christopher Morrow
On Tue, Jun 15, 2021 at 10:33 AM Christopher Morrow 
wrote:

>
> On Tue, Jun 15, 2021 at 8:07 AM Karl Auer  wrote:
>
>> On Tue, 2021-06-15 at 11:37 +, Deepak Jain wrote:
>> > (I’m talking specifically about S3 not Route5x or whatever the DNS
>> > product is).
>>
>> Route53.
>>
>> Not sure what you mean by "S3 DNS". I wasn't aware S3 had any DNS
>> functionality at all... on the other hand, there is much indeed that I
>> do not know.
>>
>>
> Maybe Deepak means:
>   "When I ask for an S3 endpoint I get 1 answer, which is 1 of a set of N.
> Why would
>the 'loadbalancer' send me all N?"
>
> (I don't know an AWS S3 URL to test this out with; an example from Deepak
> would be handy)
>
>

also, just for grins:
$ while /bin/true; do dig +short s3.amazonaws.com @ns-63.awsdns-07.com. >> /tmp/aws; sleep 1; done

after a time:
$ wc -l /tmp/aws
17787 /tmp/aws

and:
$ sort -n  /tmp/aws | uniq -c | sort -rn | wc -l
6457

Some of the addresses appear as many as ~11 times; most appear only once.
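
(For the curious, a rough Python equivalent of that shell loop — a sketch only; it goes
through the system resolver rather than the authoritative server used above, so the
counts will differ:)

# Sketch: repeatedly resolve s3.amazonaws.com and count how many distinct
# A records come back over time.
import socket
import time
from collections import Counter

seen = Counter()
for _ in range(60):          # the shell loop above ran much longer
    try:
        _, _, addrs = socket.gethostbyname_ex("s3.amazonaws.com")
        seen.update(addrs)
    except socket.gaierror:
        pass
    time.sleep(1)

print(len(seen), "distinct addresses")
for addr, count in seen.most_common(10):
    print(count, addr)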


> Regards, K.
>>
>> --
>> ~~~
>> Karl Auer (ka...@biplane.com.au)
>> http://www.biplane.com.au/kauer
>>
>>
>>
>>
>>


Comcast routing contact

2021-06-15 Thread Tim Burke
Can someone at AS7922 that handles routing please contact me off list?

Seeing bizarre/asymmetric routing in Houston via Cogent: the outbound path goes up 
to Dallas to reach Cogent transit, then back down to Houston, while the (proper) 
inbound path departs Cogent transit in Houston to hit Comcast. Can't seem to 
get someone clueful to investigate through normal support channels.

traceroute to 165.254.62.196 (165.254.62.196), 64 hops max, 52 byte packets

 3  50-225-135-89-static.hfc.comcastbusiness.net (50.225.135.89)  1.038 ms  
0.787 ms  0.816 ms
 4  162.151.134.13 (162.151.134.13)  5.531 ms  1.862 ms  1.820 ms
 5  be-33662-cr02.dallas.tx.ibone.comcast.net (68.86.92.61)  9.498 ms  8.177 ms 
 8.632 ms
 6  be-3312-pe12.1950stemmons.tx.ibone.comcast.net (96.110.34.106)  8.966 ms  
8.618 ms  8.671 ms
 7  * * *
 8  be2763.ccr31.dfw01.atlas.cogentco.com (154.54.28.73)  8.934 ms
be2764.ccr32.dfw01.atlas.cogentco.com (154.54.47.213)  9.816 ms  7.806 ms
 9  be2443.ccr42.iah01.atlas.cogentco.com (154.54.44.229)  11.962 ms  29.051 ms 
 10.605 ms
10  be2991.rcr51.iah04.atlas.cogentco.com (154.54.7.86)  12.173 ms
be3777.rcr52.iah04.atlas.cogentco.com (154.54.7.78)  11.716 ms  11.069 ms
11  te0-0-1-3.nr11.b069355-0.iah04.atlas.cogentco.com (154.24.14.186)  12.402 
ms  11.591 ms
te0-0-1-0.nr11.b069355-0.iah04.atlas.cogentco.com (154.24.14.190)  12.384 ms
12  38.122.156.130 (38.122.156.130)  11.028 ms  10.711 ms  10.744 ms
13  * * *
14  165.254.62.196 (165.254.62.196)  13.146 ms  11.642 ms  11.616 ms

traceroute to 50.225.135.90 (50.225.135.90), 30 hops max, 60 byte packets
 1  165.254.62.194 (165.254.62.194)  0.236 ms  0.185 ms  0.153 ms
 2  ve3.r20.sprntx01.mid.net (64.40.23.64)  0.357 ms  0.362 ms  0.406 ms
 3  te0-0-2-1.nr11.b069355-0.iah04.atlas.cogentco.com (38.122.156.129)  1.164 
ms  1.277 ms  1.394 ms
 4  te0-2-1-12.rcr51.iah04.atlas.cogentco.com (154.24.14.185)  1.271 ms  1.321 
ms  1.290 ms
 5  be3778.ccr42.iah01.atlas.cogentco.com (154.54.85.65)  7.351 ms 
be2992.ccr42.iah01.atlas.cogentco.com (154.54.27.109)  2.160 ms  7.310 ms
 6  be3494.rcr22.iah02.atlas.cogentco.com (154.54.40.54)  2.261 ms  2.382 ms 
be3485.rcr21.iah02.atlas.cogentco.com (154.54.28.86)  2.656 ms
 7  be3632.rcr51.b023723-0.iah02.atlas.cogentco.com (154.24.45.58)  3.331 ms 
be3631.rcr51.b023723-0.iah02.atlas.cogentco.com (154.24.30.38)  3.233 ms 
be3632.rcr51.b023723-0.iah02.atlas.cogentco.com (154.24.45.58)  3.051 ms
 8  be-200-pe01.westwaypark.tx.ibone.comcast.net (173.167.59.41)  3.003 ms  
3.353 ms  3.294 ms
 9  be-3201-cs02.houston.tx.ibone.comcast.net (96.110.39.37)  5.128 ms 
be-3401-cs04.houston.tx.ibone.comcast.net (96.110.39.61)  5.453 ms  5.392 ms
10  96.110.40.118 (96.110.40.118)  7.344 ms 96.110.40.122 (96.110.40.122)  
5.598 ms 96.110.40.118 (96.110.40.118)  7.289 ms
11  96.108.82.66 (96.108.82.66)  10.383 ms  10.696 ms  13.230 ms
12  ae-0-sur03.luthe.tx.houston.comcast.net (68.85.245.117)  10.509 ms  10.590 
ms  10.466 ms
13  50-225-135-90-static.hfc.comcastbusiness.net (50.225.135.90)  11.010 ms  
10.885 ms  11.004 ms

V/r
Tim

Re: AWS S3 DNS load balancer

2021-06-15 Thread Dan Halperin via NANOG
Hi Deepak.

Amazon documents the IPs for their public and private cloud services:
https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html

(I know this because Batfish uses these in its reachability analysis, for
example, "Make sure all outgoing flows to S3 are permitted by the
firewall".)

Thanks,
Dan

On Tue, Jun 15, 2021 at 5:07 AM Karl Auer  wrote:

> On Tue, 2021-06-15 at 11:37 +, Deepak Jain wrote:
> > (I’m talking specifically about S3 not Route5x or whatever the DNS
> > product is).
>
> Route53.
>
> Not sure what you mean by "S3 DNS". I wasn't aware S3 had any DNS
> functionality at all... on the other hand, there is much indeed that I
> do not know.
>
> Regards, K.
>
> --
> ~~~
> Karl Auer (ka...@biplane.com.au)
> http://www.biplane.com.au/kauer
>
>
>
>
>


RE: AWS S3 DNS load balancer

2021-06-15 Thread Deepak Jain


You can't use DNS to get "all" service IPs of a service like S3 or a CDN for 
traffic-engineering purposes. That will not work, ever (for services of such 
scale).

The hackery is assuming you can build a list of service IPs by querying DNS.

> There are a lot of reasons why someone may want this… particularly to 
> manage *other* people geo-basing their transport, but is this a local 
> hack or is this a feature of one of the major auth-DNS packages?
> If it's local hackery, trying to manage for it becomes a thankless activity.

CDNs and huge services work like this, and they use the standardized tools, like 
DNS, that they have at their disposal.

Building lists of service IPs from DNS is the "local hackery" here.


Toby explained the proper way to get the IP ranges. It's not via DNS; it never 
was.

--

I'm not sure where you got the idea that I wanted a list of all of their IPs. 
Sorry for any confusion, and for any offense caused by using the word "hackery" 
in a way you deemed inappropriate. 

Deepak


RE: AWS S3 DNS load balancer

2021-06-15 Thread Deepak Jain



I've just taken a squiz at an S3-based website we have, and via the S3 URL it 
is a CNAME with a 60-second TTL pointing at a set of A records with 5-second 
TTLs.

Any one dig returns the CNAME and a single IP address:

dig our-domain.s3-website-ap-southeast-2.amazonaws.com.
our-domain.s3-website-ap-southeast-2.amazonaws.com. 14 IN CNAME s3-
website-ap-southeast-2.amazonaws.com.
s3-website-ap-southeast-2.amazonaws.com. 5 IN A 52.95.134.145

If the query is repeated, the returned IP address changes roughly every five 
seconds.

What's interesting is the name attached to the A records, which does not 
include "our-domain". It seems to be a record pointing to ALL S3 websites in 
the region. And all of the addresses I saw reverse-resolve to that one name. So 
there is definitely some under-the-bonnet magic discrimination going on.

In Route53 the picture is very different, with the published website host name 
(think "our-domain.com.au") resolving to four IP addresses that are all 
returned in the response to a single dig query. There is an A-ALIAS (a 
non-standard AWS record type) that points to a CloudFront distribution that has 
the relevant S3 bucket as its origin.

Using the CNAME bypasses the CloudFront distribution unless steps are taken to 
forbid direct access to the bucket. It would be usual to use (and enforce) 
access via CloudFront, if for no other reason than to provide for HTTPS access. 

---

So, depending on what query you make, you get very different answers. For 
example, if you try s3.amazon.com you get a CNAME to rewrite.amazon.com, which 
seems reasonable for any subdomain request for which they would have a better 
response. 

I don't remember the details, but they may be moving to deterministic subdomains 
as you've shown above, with only "legacy" uses going to s3.amazonaws.com. I 
remember hearing a big uproar about it. Perhaps an AWS person will chime in with 
some color on this.

So a deterministic subdomain pointing to a group of relatively deterministic 
endpoints, even round-robin, makes sense to me as "usual in the practice of the 
art," even if those systems end up being load balancers for other systems 
behind them.

The s3.amazonaws.com behavior is different from that. I'm guessing that no one 
(else) uses this single-IP-from-a-pool trick, and therefore it's not standard. 
Further, given that AWS appears to be moving *back* to the traditional way of 
doing things, there must be undesirable limitations to this model.

[just spitballing here]

Deepak


Re: AWS S3 DNS load balancer

2021-06-15 Thread Lukas Tribus
Hello,


> AWS is doing Geo-based load balancing and spitting things out,
> and networks with eyeballs are doing their own things for traffic
> management and trying to do shortest paths to things – and responsible
> operators want to minimize the non-desirable and non-deterministic
> behaviors.

You can't use DNS to get "all" service IPs of a service like S3 or a
CDN for traffic-engineering purposes. That will not work, ever (for
services of such scale).

The hackery is assuming you can build a list of service IPs by querying DNS.


> There are a lot of reasons why someone may want this… particularly
> to manage *other* people geo-basing their transport, but is this a
> local hack or is this a feature of one of the major auth-DNS packages?
> If it's local hackery, trying to manage for it becomes a thankless activity.

CDNs and huge services work like this, and they use the standardized
tools, like DNS, that they have at their disposal.

Building lists of service IPs from DNS is the "local hackery" here.


Toby explained the proper way to get the IP ranges. It's not via DNS;
it never was.


Lukas


Re: AWS S3 DNS load balancer

2021-06-15 Thread Karl Auer
On Tue, 2021-06-15 at 10:33 -0400, Christopher Morrow wrote:
> Maybe Deepak means:
>   "When I ask for an S3 endpoint I get 1 answer, which is 1 of a set
> of N.
> Why would
>the 'loadbalancer' send me all N?"

I've just taken a squiz at an S3-based website we have, and via the S3
URL it is a CNAME with a 60-second TTL pointing at a set of A records
with 5-second TTLs.

Any one dig returns the CNAME and a single IP address:

dig our-domain.s3-website-ap-southeast-2.amazonaws.com.
our-domain.s3-website-ap-southeast-2.amazonaws.com. 14 IN CNAME s3-
website-ap-southeast-2.amazonaws.com.
s3-website-ap-southeast-2.amazonaws.com. 5 IN A 52.95.134.145

If the query is repeated, the returned IP address changes roughly every
five seconds.

What's interesting is the name attached to the A records, which does
not include "our-domain". It seems to be a record pointing to ALL S3
websites in the region. And all of the addresses I saw reverse-resolve
to that one name. So there is definitely some under-the-bonnet magic
discrimination going on.
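
A quick way to watch that rotation (a sketch only, assuming dnspython is
installed; the name is the regional endpoint above):

# Sketch: log the single A record (and its TTL) returned for the regional
# S3 website endpoint, to watch it change roughly every five seconds.
import time
import dns.resolver   # pip install dnspython

NAME = "s3-website-ap-southeast-2.amazonaws.com"

for _ in range(12):
    answer = dns.resolver.resolve(NAME, "A")
    ips = sorted(rr.address for rr in answer)
    print(f"ttl={answer.rrset.ttl:<3} {', '.join(ips)}")
    time.sleep(2)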

In Route53 the picture is very different, with the published website
host name (think "our-domain.com.au") resolving to four IP addresses
that are all returned in the response to a single dig query. There is
an A-ALIAS (a non-standard AWS record type) that points to a CloudFront
distribution that has the relevant S3 bucket as its origin.

Using the CNAME bypasses the CloudFront distribution unless steps are
taken to forbid direct access to the bucket. It would be usual to use
(and enforce) access via CloudFront, if for no other reason than to
provide for HTTPS access. 

Regards, K.


-- 
~~~
Karl Auer (ka...@biplane.com.au)
http://www.biplane.com.au/kauer





Re: AWS S3 DNS load balancer

2021-06-15 Thread Lukas Tribus
Hello,

On Tue, 15 Jun 2021 at 13:37, Deepak Jain  wrote:
> Is this a “normal” or expected solution or just some local hackery?

It's absolutely normal and expected for a huge service like this to
keep round robin on the DNS server side. YMMV with client-side DNS
round robin (Amazon needs to be in control, not your client
application), and steering traffic from one edge location or host to
another is perfectly legitimate. As the provider of such a huge
service, you also likely want to keep breaking connections from
applications with hardcoded (or "resolve at startup only") IP
addresses, so that client applications never use this approach (in the
long term at least). After all, as a service provider you want to
avoid hitting the news cycle over a legitimate DNS change just because
you don't make such changes very often and that one change triggered a
myriad of outages from broken customer applications at the same time.
So they just do it often, or all the time.

Amazon needs to stay in control of which edge nodes and locations the
clients are hitting, just like CDNs and other endpoints with major
traffic volumes.


None of this is local hackery, it's just basic DNS.
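
To make the point concrete, here is a minimal Python sketch (not from the
thread; the hostname and port are just examples) of the difference between
pinning a resolved address at startup and re-resolving for every new
connection:

import socket
import time

HOST = "s3.amazonaws.com"   # example endpoint only

# Anti-pattern: resolve once at startup and reuse the address forever.
# A later DNS change by the provider will break this client.
pinned_ip = socket.gethostbyname(HOST)

# Re-resolving for each new connection picks up changes as soon as the
# resolver's cached record expires.
def connect():
    ip = socket.gethostbyname(HOST)   # fresh lookup every time
    return socket.create_connection((ip, 443), timeout=5)

for _ in range(3):
    conn = connect()
    print("connected to", conn.getpeername()[0])
    conn.close()
    time.sleep(2)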


Lukas





RE: AWS S3 DNS load balancer

2021-06-15 Thread Deepak Jain

Maybe Deepak means:
  "When I ask for an S3 endpoint I get 1 answer, which is 1 of a set of N. Why 
would
   the 'loadbalancer' send me all N?"

(I don't know an AWS S3 URL to test this out with; an example from Deepak would 
be handy)

Regards, K.

--
~~~
Karl Auer (ka...@biplane.com.au)
http://www.biplane.com.au/kauer



First, thanks for translating “Deepak” for everyone.
Second, I was in the back of a car, so I didn’t have a convenient dig prompt. I 
considered it, but went for it anyway. I’ll blame the time of day and a lack of 
caffeine.
You’ll see from the time stamps that these were done in rapid succession at a 
command prompt. Even though I used 8.8.8.8, I can replicate the results with a 
single recursive server. I just wanted something easy for anyone to replicate.
[deleted the dig output; for giggles, run
dig @8.8.8.8 s3.amazonaws.com a few times in rapid succession.]
The TL;DR is that I got the set of IPs below; with more runs, I might get more. 
There is an obvious operational impact here: say AWS is doing geo-based load 
balancing and spitting these out, while networks with eyeballs are doing their 
own things for traffic management and trying to take shortest paths to things – 
and responsible operators want to minimize the undesirable and 
non-deterministic behaviors.
s3.amazonaws.com.   3   IN  A   52.216.105.101
s3.amazonaws.com.   1   IN  A   52.216.171.13
s3.amazonaws.com.   2   IN  A   52.216.236.45
s3.amazonaws.com.   2   IN  A   52.216.105.101
s3.amazonaws.com.   2   IN  A   52.216.138.197
s3.amazonaws.com.   2   IN  A   52.217.107.14
s3.amazonaws.com.   3   IN  A   52.216.206.53
s3.amazonaws.com.   2   IN  A   52.217.129.32
s3.amazonaws.com.   1   IN  A   52.216.236.45
s3.amazonaws.com.   3   IN  A   52.216.243.22
The question is: how are they spitting out one IP from their pool 
programmatically? There are a lot of reasons why someone may want this… 
particularly to manage *other* people geo-basing their transport. But is this a 
local hack, or is it a feature of one of the major auth-DNS packages? If it's 
local hackery, trying to manage for it becomes a thankless activity. If there 
is a standard or published method, then the feedback-loop issues can be 
curtailed.
Thanks again!
Deepak



Re: AWS S3 DNS load balancer

2021-06-15 Thread Christopher Morrow
On Tue, Jun 15, 2021 at 8:07 AM Karl Auer  wrote:

> On Tue, 2021-06-15 at 11:37 +, Deepak Jain wrote:
> > (I’m talking specifically about S3 not Route5x or whatever the DNS
> > product is).
>
> Route53.
>
> Not sure what you mean by "S3 DNS". I wasn't aware S3 had any DNS
> functionality at all... on the other hand, there is much indeed that I
> do not know.
>
>
Maybe Deepak means:
  "When I ask for an S3 endpoint I get 1 answer, which is 1 of a set of N.
Why would
   the 'loadbalancer' send me all N?"

(I don't know an AWS S3 URL to test this out with; an example from Deepak
would be handy)


> Regards, K.
>
> --
> ~~~
> Karl Auer (ka...@biplane.com.au)
> http://www.biplane.com.au/kauer
>
>
>
>
>


Re: AWS S3 DNS load balancer

2021-06-15 Thread nanog
The IP addresses for S3 do not change very often, and are region specific (as 
you would expect).

You are correct that this can cause problems for clients that never re-resolve 
(e.g. Java's networkaddress.cache.ttl=-1).

You may be interested in the (periodically updated) list of AWS IP ranges 
available via their IP ranges JSON API. Refer to:
* https://ip-ranges.amazonaws.com/ip-ranges.json
* https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html

To get all S3 IP ranges currently in use:
"""
curl -sf 'https://ip-ranges.amazonaws.com/ip-ranges.json' \
| jq '.prefixes | map(select(.service == "S3"))'
"""

To get all S3 IP ranges in your region:
"""
curl -sf 'https://ip-ranges.amazonaws.com/ip-ranges.json' \
| jq '.prefixes | map(select(.service == "S3" and .region == "eu-central-1"))'
"""

These ranges are not (to my knowledge) queryable via DNS.
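
And if you prefer to consume the JSON programmatically, a rough Python sketch 
(standard library only; the test address is just one of the answers posted 
earlier in this thread):
"""
# Sketch: fetch ip-ranges.json and check whether an address falls inside
# any published S3 prefix.
import ipaddress
import json
import urllib.request

URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

s3_nets = [ipaddress.ip_network(p["ip_prefix"])
           for p in data["prefixes"] if p["service"] == "S3"]

addr = ipaddress.ip_address("52.216.105.101")
print(any(addr in net for net in s3_nets))
"""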

In terms of this as a general behaviour, it is not uncommon. If I remember 
correctly this is how Route53 weighted records are implemented. So at least 
anyone using that feature of Route53 would be doing the same.

Met vriendelijke groeten,

Toby Lorne

‐‐‐ Original Message ‐‐‐

On Tuesday, June 15th, 2021 at 13:37, Deepak Jain  wrote:

> They seem to do something a little unusual where every DNS request provides a 
> different IP out of a small pool with those IPs not changing very frequently. 
> (I’m talking specifically about S3 not Route5x or whatever the DNS product 
> is).
>
> Basically like round robin, but instead of providing all of the IPs they are 
> only offering one. This eliminates options for the client DNS resolvers, but 
> may make some things more deterministic.
>
> Is this a “normal” or expected solution or just some local hackery?
>
> Thanks in advance,
>
> DJ


Re: AWS S3 DNS load balancer

2021-06-15 Thread Karl Auer
On Tue, 2021-06-15 at 11:37 +, Deepak Jain wrote:
> (I’m talking specifically about S3 not Route5x or whatever the DNS
> product is).

Route53.

Not sure what you mean by "S3 DNS". I wasn't aware S3 had any DNS
functionality at all... on the other hand, there is much indeed that I
do not know.

Regards, K.

-- 
~~~
Karl Auer (ka...@biplane.com.au)
http://www.biplane.com.au/kauer






AWS S3 DNS load balancer

2021-06-15 Thread Deepak Jain

They seem to do something a little unusual where every DNS request provides a 
different IP out of a small pool with those IPs not changing very frequently. 
(I’m talking specifically about S3 not Route5x or whatever the DNS product is).

Basically like round robin, but instead of providing all of the IPs they are 
only offering one. This eliminates options for the client DNS resolvers, but 
may make some things more deterministic.

Is this a “normal” or expected solution or just some local hackery?

Thanks in advance,

DJ

Re: aggregation tool that allows a bit of fuzz to aggregating ?

2021-06-15 Thread Deepak Jain
We use Perl to accomplish this kind of thing.

We blackhole /32s; when we have "enough" of them in the same /24, we remove the 
/32s after inserting a covering /24. This is a four-line script, along the same 
lines as the sed and Python suggestions.

Our threshold is pretty low: if we see four simultaneous bad actors from the same 
/24, it's gone. But we have a very fair process for putting them back into use; 
think fail2ban.
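
(Their Perl isn't shown, but the threshold logic is simple; a hedged Python 
sketch of the idea — names, threshold constant, and the sample /32s are 
illustrative, not their actual tooling:)

# Sketch: given a list of blackholed /32s, emit a covering /24 once a /24
# contains at least THRESHOLD of them, and drop the now-redundant /32s.
import ipaddress
from collections import defaultdict

THRESHOLD = 4  # "4 simultaneous bad actors from the same /24"

def aggregate(hosts):
    by_24 = defaultdict(list)
    for h in hosts:
        ip = ipaddress.ip_address(h)
        by_24[ipaddress.ip_network(f"{ip}/24", strict=False)].append(ip)

    routes = []
    for net, members in by_24.items():
        if len(members) >= THRESHOLD:
            routes.append(str(net))                      # covering /24
        else:
            routes.extend(f"{ip}/32" for ip in members)  # keep individual /32s
    return routes

print(aggregate(["192.0.2.1", "192.0.2.7", "192.0.2.9", "192.0.2.200",
                 "198.51.100.5"]))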

Best,

Deepak


On Jun 14, 2021, at 3:51 AM, Chris Hartley  wrote:


I guess something like this... maybe? Surely someone has already done this much 
better, but I thought it might be a fun puzzle.

# Let's call it aggregate.py.  You should test/validate this and not trust it 
at all because I don't.  It does look like it works, but I can't promise 
anything like that.  This was "for fun."  For me in my world, it's not a 
problem that needs solving, but if it helps someone, that'd be pretty cool.  No 
follow-up questions, please.

./aggregate.py generate 10 ips.txt # Make up some random IPs for testing
./aggregate.py aggregate 2 ips.txt # Aggregate... second argument is the "gap", third is the filename...

Most are still going to be /32s.
Some might look like this - maybe even bigger:
27.151.199.176/29
33.58.49.184/29
40.167.88.192/29
63.81.88.112/28    # This is your example set of IPs with a gap (difference) of 2.
200.42.160.124/30

"max gap" is the distance between IP addresses that can be clustered... an 
improvement might include "coverage" - a parameter indicating how many IPs must 
appear (ratio) in a cluster to create the aggregate (more meaningful with 
bigger gaps).

#!/your/path/to/python
import random
import sys

def inet_aton(ip_string):
    # Dotted-quad string -> 32-bit integer.
    octs = ip_string.split('.')
    n = (int(octs[0]) << 24) + (int(octs[1]) << 16) + (int(octs[2]) << 8) + int(octs[3])
    return n

def inet_ntoa(ip):
    # 32-bit integer -> dotted-quad string.
    octs = (ip >> 24, (ip >> 16) & 255, (ip >> 8) & 255, ip & 255)
    return str(octs[0]) + "." + str(octs[1]) + "." + str(octs[2]) + "." + str(octs[3])

def gen_ips(num):
    ips = []
    for x in range(num):
        ips.append(inet_ntoa(random.randint(0, pow(2, 32) - 1)))
    # To make sure we have at least SOME nearly consecutive IPs...
    ips += "63.81.88.116,63.81.88.118,63.81.88.120,63.81.88.122,63.81.88.124,63.81.88.126".split(",")  # I added your example IPs.
    return ips

def write_random_ips(num, fname):
    ips = gen_ips(int(num))
    f = open(fname, 'w')
    for ip in ips:
        f.write(ip + '\n')
    f.close()

def read_ips(fname):
    return open(fname, 'r').read().split('\n')

class Cluster():
    def __init__(self):
        self.ips = []

    def add_ip(self, ip):
        self.ips.append(ip)

def find_common_bits(ipa, ipb):
    # Widen the mask one bit at a time until both addresses share the prefix.
    for bits in range(0, 32):
        mask = (pow(2, 32) - 1 << bits) & (pow(2, 32) - 1)
        if ipa & mask == ipb & mask:
            return 32 - bits
    return 0  # nothing in common; the covering prefix is /0

if len(sys.argv) == 4 and sys.argv[1] == "generate":
    write_random_ips(sys.argv[2], sys.argv[3])
elif len(sys.argv) == 4 and sys.argv[1] == "aggregate":
    # TODO: Let's imagine a "coverage" field that augments the max_gap field...
    # does the prefix cover too many IPs?
    max_gap = int(sys.argv[2])
    fname = sys.argv[3]

    # ... it'd be a good idea to make sure it looks like an IP.  Oh, this only does IPv4 btw.
    ips = [inet_aton(ip) for ip in read_ips(fname) if ip != '']
    ips.sort()

    # Walk the sorted addresses, starting a new cluster whenever the gap to
    # the previous address exceeds max_gap.
    clusters = [Cluster()]  # first (empty) cluster receives the first address
    last_ip = None
    for ip in ips:
        if last_ip is None or ip - last_ip <= max_gap:
            clusters[-1].add_ip(ip)
        else:
            cluster = Cluster()
            cluster.add_ip(ip)
            clusters.append(cluster)
        last_ip = ip

    # Print a covering prefix for multi-member clusters, a /32 otherwise.
    for cluster in clusters:
        if len(cluster.ips) == 0:
            continue
        if len(cluster.ips) > 1:
            first_ip = cluster.ips[0]
            last_ip = cluster.ips[-1]
            num_bits = find_common_bits(first_ip, last_ip)
            mask = (pow(2, 32) - 1 << (32 - num_bits)) & (pow(2, 32) - 1)
            network = first_ip & mask
            print(f"{inet_ntoa(network)}/{num_bits}")
        else:
            print(f"{inet_ntoa(cluster.ips[0])}/32")
else:
    print("Usage:")
    print(f"{sys.argv[0]} generate [number of IPs] [file name] # Generate specified number of IPs, save to [file name]")
    print(f"{sys.argv[0]} aggregate [max gap] [file name] # Aggregate prefixes based on overlapping subnets/IPs per the max gap permitted...")