Hi.  I hear there's been some interest in my IPv6 DNSBL proposal.  My
goal is that since there are (close enough to) no v6 BLs or WLs yet,
this is the time to switch to a query design that will scale.  The
design I put in RFC 5782 isn't it, unfortunately, nor is anything
similar to it.
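To see why, it helps to look at what an RFC 5782-style IPv6 query name actually is: all 32 nibbles of the address, reversed and dot-separated, under the zone. A quick sketch (zone name invented for illustration):

```python
import ipaddress

def rfc5782_v6_query(addr: str, zone: str) -> str:
    """Build the RFC 5782-style IPv6 DNSBL query name:
    32 reversed, dot-separated nibbles under the list's zone."""
    ip = ipaddress.IPv6Address(addr)
    nibbles = ip.exploded.replace(":", "")  # 32 hex digits
    return ".".join(reversed(nibbles)) + "." + zone

# Every distinct /128 makes a distinct 34-label query name,
# so neighboring addresses share nothing useful in the cache.
print(rfc5782_v6_query("2001:db8::1", "dnsbl.example"))
```

Each address gets its own enormous name, and a hostile sender who hops addresses makes every lookup a cache miss.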

We'll have to change our software to handle v6 lookups no matter what,
so I don't see it as a big deal whether it's a small change or a
slightly larger change.

>Thus, we can safely make the assumption that any mailserver is going
>to follow the model of a single host per /64. ...

This strikes me as a poor idea for two reasons: it's probably not
true, and even if it is, it won't help.

The IPv6 address space is big.  Very, very big.  Even if you chop
every address in half and count only /64s, the space is still four
billion times bigger than the entire v4 address space.  Bad guys
hopping around /64s will blow out your DNS
cache just as badly as hopping around /128s.  And at this point I
would not want to assume there is only one host per /64, or that a
/64 will contain all good hosts or all bad hosts, since there will
doubtless be cases where that's not true.
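The arithmetic behind "four billion times bigger" is one line:

```python
# Number of /64s in IPv6 versus the whole IPv4 address space.
v4_addresses = 2 ** 32   # all of IPv4
v6_64_blocks = 2 ** 64   # distinct /64s in IPv6
print(v6_64_blocks // v4_addresses)  # 4294967296, i.e. ~4 billion
```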

If you've read my proposal (if you haven't, please stop, visit
http://tools.ietf.org/html/draft-levine-iprangepub-01 and read it,
then come back) you'll see that maintaining a BL/WL is fairly
complicated, but the lookups are quite simple.  Each lookup involves
about five DNS queries, but the design makes it very likely that most
if not all of the answers will already be in the local cache, since
the queries all start from the top of the same tree.

It also ensures that if you do a bunch of lookups to addresses that
are near each other, they'll probably all do the same queries, so
all after the first will be cached.
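Here's a toy sketch of that cache behavior (the naming scheme and numbers here are my invention for illustration, not the draft's actual format): a client walking a tree with fanout 40 and depth 5 issues query names that depend only on which tree node it's in, so two nearby addresses repeat the same top-of-tree queries.

```python
def lookup_path(addr_key: int, fanout: int = 40, depth: int = 5) -> list:
    """Return the sequence of query names a tree-walk lookup would
    issue, root first.  Names are hypothetical, for illustration."""
    path, span = [], fanout ** depth  # span: keys under the root
    node = 0
    for level in range(depth):
        path.append("n%d-%d.bl.example" % (level, node))
        span //= fanout
        node = addr_key // span       # which child holds the key
    return path

a = lookup_path(1_000_000)
b = lookup_path(1_000_050)            # a nearby address
shared = sum(x == y for x, y in zip(a, b))
print(shared, "of", len(a), "queries shared")  # 4 of 5 shared
```

Only the bottom-of-tree query differs, and the top ones are exactly the queries most likely to already be cached.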

Another way to look at it is the size of the zone: since each DNS
record holds 40 entries, the number of records is no more than 1/40th
of what a record-per-entry design would need.  In the common case
that entries span a range of addresses, the number of records will be
even smaller, often much smaller.  (Note that rbldnsd synthesizes
records on the fly, so as far as a client can tell, even if the
server knows something is a /16, the client sees 64K different
records.)  And finally, in this design, the client only looks for
records that exist, so there should be no negative entries in the
cache at all.  This tells me that this design would have performed
better even for short 32-bit addresses.
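The record-count arithmetic, spelled out (the 40-entries-per-record figure is from the draft; the million-entry list is just a round number for illustration):

```python
ENTRIES_PER_RECORD = 40

# Packed design: ceiling(entries / 40) records, versus one record
# per entry in the traditional design.
entries = 1_000_000
packed_records = -(-entries // ENTRIES_PER_RECORD)  # ceiling division
print(packed_records)   # 25000 records instead of 1000000

# And a single /16 listing, which the traditional design must let
# clients see as one record per address:
print(2 ** 16)          # 65536 client-visible records
```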

I've given it a fair amount of thought, and I think I have gone
through all of the same band-aids everyone else is thinking of, e.g.,
truncating everything to 64 bits, or doing some sort of probe to find
the granularity of a range, and they don't work.  When you consider the
length of the addresses, the number of queries, and the cache
behavior, I'm pretty sure this design is vastly better than anything
based on the traditional design, and is not an unreasonable amount of
work for clients.
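To be concrete about the truncation band-aid: collapsing each address to its /64 is trivial, but it doesn't shrink the key space enough to matter.

```python
import ipaddress

# Truncate an address to its /64 -- the example address is invented.
net64 = ipaddress.IPv6Network(
    "2001:db8:1234:5678:dead:beef:0:1/64", strict=False)
print(net64)    # 2001:db8:1234:5678::/64

# A hostile sender can still use a fresh /64 per message, and there
# are 2**64 of them -- each one a new cache entry under the
# traditional query scheme.
print(2 ** 64)
```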

I don't think it's perfect, and I'd be delighted to get suggestions,
but please don't start by assuming that spammers won't be maximally
hostile, or that managers will always configure their networks the
way you'd prefer.

Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly
