G'day...
Just because I'm being pedantic...
Oscar> Number of IPV4 addresses = 255*255*255*255 * 50 bytes (your allocation)
Oscar> = 4,228Mb * 50 =
Oscar> 202,280MB
Oscar>
Oscar> Number of IPV6 addresses = we can only imagine this number
Number of IPv4 addresses is 2^32 = 4294967296
Allocation per IP address is 32 bits = 4 bytes
This *excludes* the actual FQDN - if we allow 128 characters for the FQDN side of the mapping (a more realistic limit) = 128 bytes
(For the record, RFC 1035 caps a full domain name at 255 octets, so 128 characters is a reasonable working figure.)
Total bytes consumed per IP/FQDN mapping = 128 + 4 = 132
Minimum space required = 4294967296 * 132 bytes
= 566935683072 bytes
= 553648128 KB
= 540672 MB
= 528 GB
However, we are more likely to need 256 bytes rather than 128 for each mapping (a more realistic maximum line length), so total byte consumption per IP/FQDN mapping will be 260 bytes.
Minimum space required = 4294967296 * 260 bytes
= 1116691496960 bytes
= 1090519040 KB
= 1064960 MB
= 1040 GB
Now, IPv6 uses a 128-bit address space (IPv4 uses 32 bits, so there are 96 more bits, making 2^96 times as many addresses available).
Number of IPv6 addresses is 2^128 = 3.4028236692093846346337460743177e+38 ... I don't think I need to continue...
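For anyone who wants to check the figures, the arithmetic above reduces to a few lines of Python (a quick sketch; "GB" here means 2^30 bytes, as in the totals above):

```python
# Back-of-the-envelope check of the storage figures above.
addresses = 2 ** 32                      # size of the IPv4 address space

for fqdn_bytes in (128, 256):
    per_entry = fqdn_bytes + 4           # FQDN text + 4-byte binary address
    total = addresses * per_entry
    print(f"{per_entry} bytes/entry -> {total} bytes = {total / 2**30:.0f} GB")

# And the IPv6 address space, for comparison:
print(f"IPv6 addresses: 2**128 = {2 ** 128:.4e}")
```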
Btw... we can imagine 2^128 (it's easiest to remember it in that form...) ... and for that matter all numbers *are* imagined, not real... Numbers are imaginary constructs to describe quantity and don't actually exist (ie... You can see 4 ducks, but you can't see a 4, only a representation of it.)
Warmest regards
Mike
---
Michael S. E. Kraus
Network Administrator
Capital Holdings Group (NSW) Pty Ltd
p: (02) 9955 8000
"Oscar Plameras" <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
Date: 30/06/2003 02:52 PM
To: <[EMAIL PROTECTED]>
Cc: Sydney Linux Users Group <[EMAIL PROTECTED]>
Subject: Re: [SLUG] Opinions sought: Exim vs Sendmail
From: <[EMAIL PROTECTED]>
> On 29 Jun, Oscar Plameras wrote:
> > Ideally, one would want all list to be stored in the local Memory but we
> > know this is impossible and with the internet growing in leaps and bounds
> > the list is growing bigger and faster by the day. Also, you would want a
> > DNS software that predicts the information that will be requested just in
> > time when it is required. Again this is a mammoth task and out there our
> > technical friends have been trying.
>
> Well, I can't see *any* difference between this problem and the
> classical caching problem. Your traffic typically has some coherency
> simply because communications tend to be between people who are in some
> kind of dialogue.
>
> It seems to me that the cost of storing an IP address as a string, plus
> a word for the decimal IP address, should cost roughly 50 bytes. I.e.
> I'd guess you should be able to cache about 20,000 addresses / Mb. I'd
> be surprised if any but very large organisations would receive email
> from more than that number of *domains* per day.
>
The reason why it is impossible to store the whole list in local CPU memory
concurrently is, first, the physical limitations of the hardware under the
current state of technology.
The reason is as follows:
Number of IPV4 addresses = 255*255*255*255 * 50 bytes (your allocation)
= 4,228Mb * 50 =
202,280MB
Number of IPV6 addresses = we can only imagine this number
If you have such a list, imagine the amount of cpu time required to search
such a list every time an address is to be found.
This is one reason why DNS BIND adopted its methodology and strategy.
It is meant to prevent the list from growing into such a huge list without
any way to control it. The methodology and strategy are a compromise.
And the Sysadmin decides how much to compromise by way of
manipulating the configuration.
Another reason for this limitation is that the complete list is scattered
among DNS servers all across the Internet at any given time; the list changes
every minute (names change, addresses change, addresses are removed,
addresses are added, and so on); and a local DNS only knows about those
addresses previously queried for which this local DNS and its authoritative
DNS are answerable. If an address was not previously queried,
it will not be included in the cache.
A single name change will instantaneously make a local list inconsistent
with reality. And there are hundreds, perhaps thousands of changes,
additions, and removals every minute.
Incidentally, this is the reason why, when you stop and start a DNS server,
it takes a while for network throughput to return to normal, depending
on the number of clients in the network.
The DNS cache, local or authoritative, is refreshed every so often and
expired every so often, so addresses that have been in the cache for more
than a period of time get dropped; the cache therefore never has the chance
to retain the entire list.
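The expire-on-lookup behaviour described above can be sketched in a few lines of Python (a toy illustration only; the names, addresses, and TTLs are made up, and real resolvers are far more involved):

```python
import time

# Toy TTL-bounded cache: entries older than their TTL are dropped on
# lookup, which is why a resolver cache never accumulates the whole list.
class TTLCache:
    def __init__(self):
        self._store = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl):
        self._store[name] = (address, time.time() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None            # never queried -> not in the cache
        address, expiry = entry
        if time.time() > expiry:
            del self._store[name]  # expired -> dropped, must re-query
            return None
        return address

cache = TTLCache()
cache.put("example.com", "192.0.2.1", ttl=0.1)
assert cache.get("example.com") == "192.0.2.1"   # fresh entry: a hit
time.sleep(0.2)
assert cache.get("example.com") is None          # TTL elapsed: dropped
```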
> If one cache entry saves you thousands or even just tens of
> milliseconds, then setting aside some space would give a speed-up of
> at least 3 orders of magnitude.
One can tune named up to a point. Tuning, as you know,
is a compromise; you win some and you lose some, and there is
no one-way advantage.
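As one concrete example of that compromise, BIND 9 lets the admin cap the cache's memory use and how long records may live in it (the values below are purely illustrative, not recommendations):

```
options {
    max-cache-size 32M;     // cap memory used by the resolver cache
    max-cache-ttl 86400;    // keep no positive answer longer than a day
    max-ncache-ttl 3600;    // cap negative-answer caching at an hour
};
```

A bigger cache and longer TTLs mean fewer upstream queries but staler, more memory-hungry data; smaller values mean fresher answers at the cost of more lookups.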
http://www.acay.com.au/~oscarp/disclaimer.html
http://www.acay.com.au/~oscarp
--
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug
