Re: [dns-operations] rate-limiting state

2014-02-06 Thread Vernon Schryver
 From: Colm MacCárthaigh c...@stdlib.net
 To: Paul Vixie p...@redbarn.org
 Cc: DNS Operations List dns-operations@lists.dns-oarc.net

  For example, if the authoritative provider www.example.com were to
  implement RRL as you describe, then an attacker could spoof traffic
  purporting to be from Google Public DNS, OpenDNS, Comcast ... etc, and
  cause www.example.com to be un-resolvable by users of those resolvers.
 
  no. it just does not work that way.

 O.k., so say I spoof 10M UDP queries per second and 10M TCP SYNs per second
 purporting to be from OpenDNS's IP address. Does RRL (a) let the queries
 and SYNs go unanswered, or (b) rate limit the responses?

 If it's (a) RRL doesn't prevent the reflection. If it's (b) then you
 complete a denial of service attack against the OpenDNS users.

 Which is it? or what's option (c)?

I think one option (c) (there might be others) is related to what
Paul Vixie meant when he wrote:

]  The more common case will be like DNS RRL, where deep knowledge
]  of the protocol is necessary for a correctly engineered rate-limiting
]  solution applicable to the protocol

in http://queue.acm.org/detail.cfm?id=2578510

I've written too many times here and elsewhere that DNS RRL is not a
naive firewall rate limit.  Simplistic firewall rate limiting against
DNS reflections is little better than blocking all ICMP on security
grounds.  That is why DNS RRL is in the DNS code instead of firewalls.
That's also why there are two R's in RRL.

There are plenty of words in the documentation, technical reports, and
analyses of the various RRL implementations about RRL false positives.
There is disagreement about the best values for the parameters that
minimize RRL false positives, but we who have the least interest in
the topic agree that neither option (a) nor option (b) fits.
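For readers who haven't seen those documents, here is a toy sketch (hypothetical names and parameters; not the BIND or ISC code) of why the two R's matter: RRL buckets identical *responses* per client netblock and "slips" an occasional truncated reply, so a spoofed flood is neither reflected at full rate (a) nor a complete blackhole for the victim resolver (b).

```python
class ResponseRateLimiter:
    """Toy RRL sketch: credits are keyed by the *response* (qname, qtype,
    rcode) plus the client netblock, not by source address alone."""

    def __init__(self, responses_per_second=5, slip=2):
        self.rate = responses_per_second
        self.slip = slip   # every slip-th suppressed answer is sent truncated
        self.buckets = {}  # (netblock, qname, qtype, rcode) -> [credits, last, dropped]

    def decide(self, client_ip, qname, qtype, rcode, now):
        netblock = ".".join(client_ip.split(".")[:3])   # IPv4 /24, crudely
        key = (netblock, qname, qtype, rcode)
        credits, last, dropped = self.buckets.get(key, (self.rate, now, 0))
        credits = min(self.rate, credits + (now - last) * self.rate)
        if credits >= 1:
            self.buckets[key] = [credits - 1, now, 0]
            return "answer"
        dropped += 1
        self.buckets[key] = [credits, now, dropped]
        if self.slip and dropped % self.slip == 0:
            return "slip"   # TC=1 reply: a real resolver retries over TCP
        return "drop"

rrl = ResponseRateLimiter()
decisions = [rrl.decide("198.51.100.7", "www.example.com", "A", "NOERROR", now=0.0)
             for _ in range(8)]
# first 5 answered, then the spoofed excess alternates between drop and slip
```

The slipped truncated replies are what give a legitimate resolver whose address is being spoofed a TCP fallback path; that is roughly the shape of option (c).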


Vernon Schryver v...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] rate-limiting state

2014-02-06 Thread Vernon Schryver
 From: Colm MacCárthaigh c...@stdlib.net

 I chose a fairly typical number, which is actually below average. Arbor's
 data on DDOS puts 10M somewhere between the 40th and 50th percentile.  I'd
 be really surprised if OpenDNS's pipes fill up with that kind of small
 volume.

That seems to assume that the infamous Gbit/sec DNS reflection attacks
involve one or at most a handful of mirrors.  That assumption is wrong.


  so, third, let's look squarely at large enough UDP flow to activate RRL.

 10M requests/sec for www.example.com, type=A. Would that be large enough?

10 Mqps is about 1,000,000 times higher than necessary to trigger DNS
RRL.  I think 5 or 10 qps is an appropriate DNS response rate limit
(although many operators like 50 or even 100).  5, 10, or even 500 qps
is a bad limit if your DNS rate limiting is naive firewall counting
that pays attention only to source addresses. 



   but I don't think that the numbers work out. If
 you're getting an attack of 10M PPS, which is very realistic, you'll end up
 denying service to real users.

In most cases (i.e. not OpenDNS, Google, Comcast, etc.), if you're
getting 10 Mqps, then your DNS server is denying service to real
users regardless of any response rate limiting, because 10M DNS
queries/second is perhaps 4 Gbit/sec as well as a healthy CPU load.
What is the queryperf number of your DNS system over localhost?
(queryperf is a common tool for measuring how many queries your DNS
system can answer.)
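For anyone who hasn't run it, a sketch of a queryperf invocation (queryperf ships in BIND's contrib/ tree; the flags shown are from memory and may differ by version):

```shell
# Build a small query data file: one "name type" pair per line.
printf 'www.example.com A\nexample.com NS\n' > queries.txt

# Run against localhost for 30 seconds, if queryperf is installed.
if command -v queryperf >/dev/null 2>&1; then
    queryperf -d queries.txt -s 127.0.0.1 -l 30
fi
```

The summary it prints ("Queries per second") is the number being asked about above.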


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] chrome's 10 character QNAMEs to detect NXDOMAIN rewriting

2013-11-26 Thread Vernon Schryver
 From: Rubens Kuhl rube...@nic.br

 Yeap, in the source code. Some discussions on those:
 http://productforums.google.com/forum/#!topic/chrome/dQ92XhrDjfk
 https://code.google.com/p/chromium/issues/detail?id=47262
 http://serverfault.com/questions/235307/unusual-head-requests-to-nonsense-urls-from-chrome
 http://www.forensicswiki.org/wiki/Google_Chrome#Start-up_DNS_queries

If those diagnoses are correct that the probes are for subdomains of
the local host's domain, then I don't understand the grounds for faulting
Google.  The probes should cause no additional traffic to the roots,
gTLD servers, or anything outside the user's ISP's network.  Whether
the local system thinks it is in something bogus like computer.domain
or something reasonable, the Google Chrome probes should hit
entries in the ISP's recursive servers' caches.
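For context, the probes under discussion can be approximated like this (a sketch of the behavior described in the links above, not Chrome's actual code; the function name is made up):

```python
import random
import string

def chrome_style_probes(n=3, length=10):
    """Random single-label names, like the ~10-character QNAMEs Chrome
    reportedly resolves at startup to detect NXDOMAIN rewriting."""
    return ["".join(random.choices(string.ascii_lowercase, k=length))
            for _ in range(n)]

probes = chrome_style_probes()
# If all of these "resolve", the upstream resolver is rewriting NXDOMAIN.
# A stub resolver may append the local search domain to such single-label
# names, which is why the lookups can stay inside the ISP's caches.
```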


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS Caching issue with ATT and Verizon Wireless (other small carriers too)

2013-11-06 Thread Vernon Schryver
 From: Randy Raitz randy.ra...@readytalk.com

 I'm writing today after a major mistake by register.com rendered our business
 ReadyTalk nearly useless to our customer base. During a routine request
 to review our registrar information (removing privacy filters)

I'm trying to suppress my kneejerk reaction that people who use
spammer shields deserve that kind of trouble and worse.

Note that I did not write "spam shields", because GoDaddy's original
target market for "privacy" shields was spammers irritated by spam
complaints (from spam targets unthinking enough to assume spammers
might do something they'd consider good with spam reports) and too
stupid or lazy to use one of the many undetectable methods for
black-holing complaints to whois contacts (e.g. throw-away phone
numbers, postal addresses, and SMTP addresses that work until the start
of an advertising campaign).

Note also that I and my vanity domain have been around for a while,
its registration has never used any privacy or proxy services,
its registered mailboxes and telephone numbers have always been valid,
the current lefthand sides of the mailboxes show I've found it
necessary to change them very infrequently,
and that some few people claim that I'm hypersensitive to spam.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-26 Thread Vernon Schryver
 of such very weak security claims
are in Haya Shulman's papers.)  The reasonable conclusion to this 
report is
  Deploy DNSSEC because DNS without DNSSEC is insecure.

The paper also contained an explanation of the effect that is
unsupported by measurements, tests, or code reading and simply
wrong.  That explanation is also irrelevant to the point and
reasonable conclusion of the paper.  The right way to fix the paper
is to remove the irrelevant, unsupported explanation.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-25 Thread Vernon Schryver
 From: Stephane Bortzmeyer bortzme...@nic.fr

  Why would there be extra support calls?  Wrong keys are no worse
  than wrong delegations 

 Of course, they are worse. In the vast majority of cases, lame
 delegations (or other mistakes) do not prevent resolution (as long as
 one name server works). A wrong key can completely prevent resolution,
 leading to a loss of service. The DNS is extremely robust, you have to
 try very hard to break it. With DNSSEC, it's the opposite, you have to
 be very careful for it to work.

Let's agree to somewhat disagree about that.  I've found giving one's
registrar the wrong IP address or glue a lot worse than a stupid
delegation in my own zone files.  DNSSEC needs more effort than plain
DNS, but almost none of that extra effort has anything to do with registrars.
Registrars/registries must publish your DNSSEC DS RRs, but they already
publish your other RRs (NS and glue), so that's no extra work for them.


  Why would registrars get support calls about validation problems?
  Do they get calls now (that they answer) from DNS resolver operators
  (other than big resolvers like Comcast) for lame delegations?

 See above. I cannot visit http://www.онлайн/ while it works from
 $OTHERISP so it's your fault.

I don't understand that.  Of course DNSSEC causes more support calls,
but the calls are to ISPs and IT groups and not to the registrars
trying to sabotage or delay DNSSEC for as long as possible supposedly
because of DNSSEC support calls.

And again, whether or not they do suffer more support calls, let them
charge extra for adding DNSSEC records!  If they can profit from simple
DNS for US$10/year, then they could profit with DNSSEC for US$30/year.

A rational reason for the registrar DNSSEC sabotage is that the margins
on the PKI certs they flog to the punters are a lot more than US$30/year
(proof: free certs), and DNSSEC+DANE will eventually kill that cash
cow.  Yes, no doubt some registrars are too dumb to see that.


   ...

} From: Stephane Bortzmeyer bortzme...@nic.fr

}  This is not an attack on DNS, but an attack on IP reassembly
}  technology.
}
} Frankly, I do not share this way of seeing things. Since the DNS is,
} by far, the biggest user of UDP and since TCP is already protected by
} PMTUD, I do not think we can say it's not our problem.

How does PMTUD protect TCP?  When since perhaps 1995 has PMTUD been seen
as protection instead of vulnerability, thanks to goobers with firewalls?

Why can't similar attacks using TCP segment assembly be mounted against
DNS/TCP?  I've heard of more segment assembly attacks than IP fragment
assembly attacks, albeit against TCP applications other than DNS.


}  This might happen even due to malfunctioning network adapter or
}  other network device, not necessarily an attack.
}
} A random modification by a malfunctioning device or an errant cosmic
} ray has a very small probability of being accepted (UDP checksum, DNS
} checks, etc). We are talking here about a deliberate attack, by a
} blind attacker.

(I thought these latest attacks are not blind, but never mind that.)

I've seen vastly more bit rot undetected by UDP and TCP checksums
(esp. due to my own software and firmware bugs and bugs in green
hardware) than human attacks.  And the number of bad TCP and UDP
checksums reported by `netstat -s` on any even slightly busy host
should be worrisome.

Why does it matter whether the bad bits are natural or man made?
Why not turn on DNSSEC, declare victory, and go home?

Why continually rehash the falling DNS sky?  Aren't there enough other
security issues?  Some that I've heard about are incomparably worse
in consequences as well as ease of attack (e.g. no hours of 100 Mbit/sec
flooding or per-target packet tuning to forge one measly DNS response).


Vernon Schryver v...@rhyolite.com

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-23 Thread Vernon Schryver
 things than fuzz DNS caches
with access to the LAN where these tests were done--or most LANs.
(By another I'm referring to the mistaken reports that RRL+SLIP=1
is bad because of non-DNSSEC cache corruption under 4 hour 100 Mbit/sec
floods.)

Instead of looking for yet more obscure ways (e.g. 100 Mbit/sec floods
on LANs) in which non-DNSSEC DNS is insecure, why not enable DNSSEC
and declare victory?


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

 Please read my first post in this thread, you should find all information
 there.

I see I'm stupid for not seeing that in the first message.  I did search
for 'http' but somehow didn't see the URL.  But why not simply repeat
the URL for people like me?  Why not the URL of the paper at the
beginning instead of a list of papers?
https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf

By searching for DNSSEC with my PDF viewer, I found what I consider
too few references to the effectiveness of DNSSEC against the attacks.
There is nothing about DNSSEC in the abstract, a list of DNSSEC problems
early, and a DNSSEC recommendation in the conclusion that reads to me
like a concession to a referee.  Others will disagree.

After skimming the papers at 
https://sites.google.com/site/hayashulman/publications
since at first I was not sure which one (my fault), I've the
impression that Haya Shulman doesn't like:

 - forwarding to third party resolvers.
I agree so strongly that it feels like a straw man.  I think
forwarding to third party resolvers is an intolerable and
unnecessary privacy and security hole.  Others disagree.

 - other mistakes
 that I think are even worse than forwarders.

 - DNSSEC
Perhaps that will be denied, but I challenge others to read those
papers with their litanies of DNSSEC issues and get an impression
of DNSSEC other than sow's ear sold as silk.  That was right
for DNSSEC in the past.  Maybe it will be right forever.  I hope
not, but only years will tell.  As far as I can tell from a quick
reading, the DNSSEC issues are valid, but are sometimes backward
looking, perhaps due to publication delays.  For example, default
verifying now in server software and verifying by resolvers such
as 8.8.8.8 should help the verifying situation.


  work on DNSSEC improvements and bug fixes before or after your
  issues? 

 Requiring such answers from me is absolutely out of place, I am
 probably not aware of the constraints that organisations face in their
 every day operation of the Internet, and so I never argued which
 countermeasures must be deployed and by whom. My goal is to identify
 vulnerabilities and investigate and recommend countermeasures that can
 prevent them. Each organisation should decide what solution suits its
 needs best, based on this and other information that is available to
 it.

That non-answer is absolutely out of place given Haya Shulman's
recommendations.  It is inconsistent to presume enough awareness
of constraints etc. to tell people 'Do this, that, and the other' but
be unwilling to say whether those actions should be done before or
after closely related work.  This is especially true on this mailing
list, because for operators the recommendations are functionally
equivalent to "do nothing but wait for new DNS software."


 Port randomization is an extremely thin reed for security, because
  there are so few port number bits.

 There are techniques to artificially inflate ports' distribution, and we
 already described one technique in ESORICS'12 paper.

Would that paper be 
http://link.springer.com/chapter/10.1007/978-3-642-33167-1_16
linked from https://sites.google.com/site/hayashulman/pub ? 
If so, where or how can I find a free version or a summary of the
notion?  Getting more than 16 bits of entropy from a 16 bit value
sounds interesting.  (I trust it's not that literal impossibility.)
I've heard of
  - jumbling the case of the query name,
 but that suffers from limitations in resolver cache code
 and it's not part of the UDP port number,
  - other fiddling with the payload, but that's not the port number,
  - the ID, but that's not the UDP port number.
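The arithmetic behind that skepticism, as a back-of-envelope sketch (illustrative numbers, not taken from the paper):

```python
# A blind off-path spoofer must guess the 16-bit DNS ID and, when source
# ports are random, roughly 16 more bits of port.  Entropy adds; a 16-bit
# field can never contribute more than 16 bits, no matter how it is chosen.

def expected_forgeries(id_bits=16, port_bits=16):
    """Expected spoofed responses before one matches ID and port by luck."""
    return 2 ** (id_bits + port_bits)

assert expected_forgeries(port_bits=0) == 65536    # ID alone
assert expected_forgeries() == 4294967296          # ID plus random port

# At, say, 100,000 forged packets/second, the ID-only search falls in
# under a second, while ID+port takes on the order of half a day -- a
# real improvement, but a palliative next to cryptographic protection.
```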


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Vernon Schryver
I'm puzzled by the explanation of Socket Overloading in 
https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf
I understand it to say that Linux on a 3 GHz CPU receiving 25,000
packets/second (500 bytes @ 100 Mbit/sec) spends so much time in
interrupt code that low level packet buffers overflow.

That puzzles me for reasons that might be summarized by considering
my claim of 20 years ago that ttcp ran at wirespeed over FDDI with
only 40-60% of a 100 MHz CPU.
https://groups.google.com/forum/#!topic/comp.sys.sgi.hardware/S0ZFRpGMPWA
https://www.google.com/search?q=ttcp

Those tests used a 4 KByte MTU and so about 3K pps instead of 25K pps.
The FDDI firmware and driver avoided all interrupts when running at
speed, but I think even cheap modern PCI Ethernet cards have interrupt
bursting. Reasonable network hardware interrupts the host only when
the input queue goes from empty to not empty or the output queue goes
below perhaps half full, and then only interrupts after a delay
equal to perhaps half a minimum sized packet on the medium.  I wouldn't
expect cheap PCI cards to be that reasonable, or to have hacks such as
ring buffers with prime number lengths to avoid other interrupts.
Still, ...

IRIX did what I called page flipping and what most call zero copy I/O
for user/kernel-space copying, but modern CPUs are or can be screaming
monsters while copying bytes which should reduce that advantage.  It
would be irrelevant for packets dropped in the driver, but not if the
bottleneck is in user space, such as an overloaded DNS server.

That old ttcp number was for TCP instead of UDP, which would be an
advantage for modern Linux.

So I would have guessed, without having looked at Linux network
code for many years, that even Linux should be using less than 20%
of a 3 GHz CPU doing not only interrupts but all of UDP/IP.
  100MHz/3GHz * 60% * 25000 pps /3000 pps = 17%
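The scaling estimate above, spelled out (same numbers as in the text):

```python
# Scale the old FDDI result (100 MHz CPU at ~60% load moving ~3K pps) to
# the reported test (3 GHz CPU, 25K pps), assuming cost per packet scales
# with clock rate -- a rough assumption that ignores memory-speed changes.
old_mhz, new_mhz = 100, 3000
old_load, old_pps, new_pps = 0.60, 3000, 25000

estimated_load = (old_mhz / new_mhz) * old_load * (new_pps / old_pps)
assert round(estimated_load, 2) == 0.17   # about 17% of one core
```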

Could the packet losses have been due to the system trying to send
lots of ICMP Port-Unreachables?  I have the confused impression that
Socket Overloading can involve flooding unrelated ports.

How was it confirmed that kernel interrupt handling was the cause
of the packet losses instead of the application (DNS server) getting
swamped and forcing the kernel to drop packets instead of putting
them into the application socket buffer?  Were giant application
socket buffers tried, perhaps with the Linux SO_RCVBUFFORCE?
(probably a 30 second change for BIND)
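A sketch of the giant-socket-buffer experiment suggested above, using plain SO_RCVBUF (SO_RCVBUFFORCE is Linux-only, needs CAP_NET_ADMIN, and can exceed the net.core.rmem_max cap that silently limits SO_RCVBUF):

```python
import socket

# Ask for an 8 MB receive buffer so query bursts queue in the kernel
# instead of being dropped before the DNS server can read them.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)

# The kernel reports what it actually granted (Linux returns roughly
# double the request for bookkeeping); a much smaller number means the
# request was capped and SO_RCVBUFFORCE or a sysctl change is needed.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()
```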

25K qps is not a big queryperf number, which is another reason why I
don't understand how only 25K UDP qps could swamp a Linux kernel.  Just
now the loopback smoke-test for RPZ for BIND 9.9.4 with the rpz2 patch
reported 24940 qps without RPZ on a 2-core 2.4 GHz CPU running FreeBSD 9.0.

What about the claims of Gbit/sec transfer speeds with Linux?
https://www.google.com/search?q=linux+gigabit+ethernet+speed

I'm not questioning the reported measurements; they are what they are.
However, if they were due to application overload instead of interrupt
processing, then there might be defenses such as giant socket buffers.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Vernon Schryver
 elevate a convenient issue like raw sockets
above all others.  To the extent that the punters paid attention to Steve
Gibson's campaign against Windows XP because of raw sockets, he harmed
them and the Internet in general, because XP was less insecure than
previous versions of Windows.

} Furthermore, if port randomisation is not an issue why standardise
} [RFC6056]? Why set up DNS checkers? If current port randomisation
} algorithms are vulnerable - why not fix?

That's a straw man.  No one has said that ports should not be random.
It was an unfortunate oversight that RFC 1948 did not recommend random
ephemeral ports.  If RFC 1948 had mentioned port numbers it would
have concluded:

   Good sequence numbers AND PORT NUMBERS are not a replacement
   for cryptographic authentication.  At best, they're a palliative
   measure.
 (capitalized words added to actual RFC 1948 text).


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Vernon Schryver
 From: Colm MacCárthaigh c...@stdlib.net


 Economics also include costs. The operational cost of deploying DNSSEC
 validation on resolvers remains high - there are still frequent key
 rotation and signing errors that cause various DNS subtrees to be
 unresolvable.

On what do you base your claims about the fatal costs of DNSSEC
validation?
I claim relevant knowledge and experience, not just from code I wrote
a few years ago to reduce the costs of DNSSEC on very large resolvers,
but from signing my own domains and enabling validation on all of the
resolvers that I control.  My domains and resolvers are insignificant,
but I hope I would have noticed any fatal costs.

Are you aware that Comcast's resolvers have been validating for some
time?  I think Google is also validating, based on "Webmaster, your
web page is not available to your spider" messages after a configuration
error in my signing machinery, but I am not sure.  Does that conflict
with your claims about the fatal costs of validating?

Yes, I've noticed that Google is still not signing.  Maybe the
continuing hijackings of their ccTLD domains will move them.


 If an attacker can cause the domain to be unresolvable, that seems
 like a weakness.

True, but the right question is not "Does DNSSEC add vulnerabilities?"
but "Overall, is DNS more or less secure with DNSSEC?" or "Among all
of the things I can do, what will improve the security of my users and
the Internet in general?"

Defenders who care about the security of their systems and the Internet
in general don't pick and choose among weaknesses based only on what
is easiest, what can be punted to others, or what contributes to their
reputations.  They don't do as Steve Gibson did and harp on the bogus
catastrophe of Windows XP raw sockets to enhance his reputation and
sell his services.


 Kaminsky wasn't the discoverer of the Kaminsky's bug either, it was
 long known, yet here you credit him. Not that I mean to deny credit to
 Kaminsky, he did a good job of publicising the vulnerability. Just as
 Haya has done here.

I suspect Kaminsky got the credit because he had been contributing to
the field for years.  But who cares who got there first?  Every request
I see for credit is recorded in my private accounting as a debit against
the credibility of the person demanding credit, because credit demands
suggest interests which suggest biases and so inaccuracy.

Yes, I've heard of Kaminsky's business interests, and so I don't
take his announcements at face value.  You should also discount my
credibility based on my pecuniary or other interests.  Where you
can't determine my interests, act on your best guess.


 Back before Kaminsky made the need for port-randominsation undeniable
 with an actual working PoC, this sounds like the ISC/Bind response to
 port randomisation attacks. Other implementors and operators made a
 better judgement and avoided the problem entirely, taking the cautious
 path. 5 years later, are you really saying we should ignore another
 attack vector?

Who besides you and Haya Shulman has said anything about not randomizing
ports?  What port randomization improvements do you think are needed
in current releases of any major DNS implementation?  Where port
randomization problems exist such as in junk CPE that won't get fixed
before I retire, what contributes most to solutions: selling
US$29.95/€24.95/£19.95 academic papers or turning on DNSSEC?

The issue for me is one of relative priorities.  Among all Internet
security issues that I might touch, which should get my attention
and effort?  If I remained silent about emphasising port randomization
over DNSSEC (or using distant instead of nearby validating resolvers),
would I help or harm?


 The impact even with DNSSEC fully enabled seems concerning enough to
 warrant attention.

Let's agree that ports ought to be as random as TCP ISNs, improve port
randomness where each of us can, and stop implying that anyone thinks
or says otherwise.  Let's also stop the "DNSSEC is a problem" stuff.

Finally, let's consider how you are helping.  Is there anything you can
do to improve port randomization?  If you are a committer in any open
or proprietary source trees, will you make any needed port randomization
fixes?  Have you deployed DNSSEC?  What about BCP 38, since cache
poisoning is likely to depend on BCP 38 violations?


Vernon Schryver v...@rhyolite.com

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Vernon Schryver
 From: Warren Kumari war...@kumari.net

  I suspect they're more interested in getting registry lock in place 
  rather than DNSSEC.

  Most of the attacks against Google have involved changing the name servers 
  completely .. 
  
  Through social engineering and sometimes through directed attacks, yes.

 Sadly yes. 

I trust we all agree that cache attacks with non-random ports,
fragmentation, or padding are irrelevant except perhaps indirectly
through the general (lack of) value of DNSSEC that I claim better
prevents cache attacks than random ports.

Wouldn't DNSSEC at least not have made things worse, and possibly made
them better, by:
  - making the social engineering more difficult by forcing the bad
  guys to change key as well as NS RRs
  - possibly making the bogus records fail to validate for a while
 at the start of the attack, thanks to what might look like an
 unplanned KSK change.
  - possibly making the bogus records fail to validate sooner and so
 get ignored sooner after the registrar records are restored, again
 thanks to what might look like an unplanned KSK change.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-20 Thread Vernon Schryver
 factors.
 Port randomisation algorithms that were proposed have weaknesses, but
 proper randomisation should solve these problems.

I doubt the implication in that sentence, because I give you more
credit than to think that any of the recommendations in your 19 Oct 2013
mail could and would be deployed overnight or more easily than DNSSEC by
any except a very few individuals.  The only recommendation that might
be done quickly is forcing TCP to proxies.  That could be done with a
firewall ACL against UDP, but no one should do that.  The others require
changes to source.  For example, several of your recommendations are
to randomise ports.  Any code that doesn't already do that will
continue not doing it at least until its next release.

Fixing your vulnerabilities would be technically easy for many people,
including those whose resumes are like mine.  However, what is easy
for professional kernel and DNS server hacks is irrelevant.


 I was under pressure to catch a flight when I responded and forgot DNSSEC;
 it is as dear to me as it is to you :-)

I'm sorry, but I think the mention of DNSSEC in your paper exists only
because others forced it.  I'm forced to that belief by various things,
including your refusal to admit the obvious about relative priorities and
by statements like the sentence above suggesting that fixing port
randomization could be easier than deploying DNSSEC in any except quite
exceptional cases.

You directed Paul Vixie to a copy of your paper on your web site, but
I didn't see a URL.  Is http://www.ec-spride.de/ your web site and if
so, is your paper there?  I can't find any other likely URLs.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-20 Thread Vernon Schryver
 From: David Conrad d...@virtualized.org

  Should the people working on DNS implementations prioritize making
  their DNSSEC code more robust and easier to use above or below
  addressing your issues?
 
 I'd say below.
 
 Resolver operators (hopefully) want to protect their caches. DNSSEC
 will do that, but only if people are signing their zones. There are lots
 of external parties (e.g., registries, registrars, software developers,
 resolver operators, etc) to get DNSSEC deployed and there remains very
 little incentive for anyone to sign their zones, regardless of how
 robust and easy it might be made.
 
 The alternative would be to disregard current and future cache poisoning
 attacks. Pragmatically speaking, I personally think it highly
 questionable to ignore cache poisoning vulnerabilities because something
 which isn't yet deployed to 10% of the Internet will fix it.
 
 This would be a bit like saying don't deploy RRL because BCP38 is the
 correct answer to the problem.

On the contrary, anyone who spends even one minute on RRL that
could be spent on BCP 38 should be...well, I can't say shot
because I oppose capital punishment.  RRL should be considered
only after everything possible has been done for BCP 38.

Similarly, only after there is nothing that you can do to improve your
DNSSEC implementation should you consider improving your port
randomization.  I agree that port randomization should come before
a lot of other things, although that's not saying much, because the
major DNS implementations are filled with things I would have vetoed
if I'd been king.

I think their work showing the weaknesses of port randomization in
theory and practice is important, because it shows that no security
should depend on adversaries being unable to inject packets into UDP
or TCP streams because ports are secret.  I strongly disagree with
Haya Shulman's words to Paul Vixie that seemed to say that their work
might fix other applications and protocols.  I think their work shows
that port randomization is like RRL, a lame kludge of a mess that is
better than nothing but not even a distant second choice to actually
fixing the problem.

I say only consider improving port randomization, because nothing
should be added to anything or even changed without clear and
significant benefits, especially in security related areas.  You've
been around long enough to remember that many added "nice" features
have caused big security problems.


Vernon Schryver v...@rhyolite.com

P.S. I'm licensed by http://ss.vix.su/~vixie/isc-tn-2012-1.txt and 
http://ss.vix.su/~vjs/rrlrpz.html to criticize RRL.

P.P.S. I've often heard Paul say much the same thing about RRL being
a bad idea except compared to the alternative of ignoring the consequences
of everyone else's failure to deploy BCP 38.
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-19 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

 We (me and my phd advisor Prof Amir Herzberg) recently found a number of
 new DNS vulnerabilities, which apply to patched and standard DNS resolvers,
 ...

 Recommendations:
 ...

The complete absence of any mention of DNSSEC among those recommendations
(or elsewhere) reads like an implicit claim that DNSSEC would not
help.  Even if that claim was not intended, would it be accurate?

Would DNSSEC make any of the recommendations less necessary or perhaps
even moot?  If DNSSEC by itself would be effective against cache
poisoning, then why isn't it among the recommendations, especially for
Resolver-behind-Upstream?  Why aren't efforts to protect port
randomization, hide hidden servers and so forth like trying to make
it safe to use .rhosts and /etc/hosts.equiv files by filtering ICMP
redirects and IP source routing, and strengthening TCP initial sequence
numbers?

It's not that filtering ICMP redirects, etc. are wrong, but I think
today those things are used for availability instead of data integrity
(or authentication and authorization), and small leaks are not
always and everywhere seen as catastrophes.  In fact, haven't ICMP
redirects been reborn as fundamental parts of IPv6?


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-19 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

 IMHO, DNSSEC is simply the natural defense against the attacks, which
 is why I did not explicitly mention it, but I definitely had it in
 mind :-)

In that case, on what should an organization spend time or money
first, on DNSSEC or the recommendations in the mail message?  Would
it be better if each of the recommendations in the mail message
started with something like this?

Deploy DNSSEC, and consider the following to help protect cached
data not yet protected with DNSSEC.

 Regarding the proxy-behind-upstream: to prevent the attacks DNSSEC has
 to be deployed(and validated) on the proxy. Currently it seems that
 there are proxies that signal support of DNSSEC (via the DO bit), but
 do not validate responses, and validation is typically performed by
 the upstream forwarder.

That sounds like a more significant bug than port obscurity or
randomization.  If it is a bug, which should be addressed first in
that software or those installations, this DNSSEC bug or the
recommendations in the mail message?  If it is a significant DNSSEC
bug, it would be good if a future version of the mail message
mentioned it.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Should medium-sized companies run their own recursive resolver?

2013-10-17 Thread Vernon Schryver
 sizes, etc.. and you're in the right range for 
 4-5+ devices per home at least.  So 500 unique stub resolvers is a large 
 population, but at the same time, it's not enough to justify spending $1k for 
 a server *2 + UPS.  That cost on my friend's WISP would wipe some or all 
 profits.  Perhaps he should charge more, but he doesn't have the knowledge or 
 skill to operate things, but does know it might help to have a faster box 
 on-net

That's yet another nonsensical and offensive (because it insults the
reader's intelligence) red herring.  That organization doubtless has
a firewall, often part of a router.  That firewall could easily run a
recursive resolver for those 500 stubs, because DNS requires only a
fraction of the resources needed for firewalling or routing.  The
configuration of that recursive resolver could be entirely automatic,
because the firewall has all of the parameters needed to configure a
closed verifying recursive resolver.

Yes, your Open Resolver project might have found that many broken open
DNS forwarders in CPE routers and bridges answer WAN requests.  The
fault for that lies with your big ISPs with those big, creative DNS
resolvers, because they chose the incompetent, below cost CPE vendors,
and shipped and supposedly continue to support and maintain that junk
CPE.  A dispassionate observer might see that as a compelling argument
against using more products from your big ISPs, such as their resolvers.


  I am offended on behalf of those hypothetical IT professionals by your
  persistently infantilizing them.  Attitudes like yours in ISPs are why
  there is so little BCP38 compliance and so many open resolvers.

 Ha!  BCP-38 is actually *hard*.  Customers don't know their IP
 ranges or domains they own/operate.

That's yet another irritating and offensive red herring.  As you
know, BCP 38 is done in the vicinity of routers.  Customers who
don't know their IP address blocks aren't running their own routers
and aren't responsible for BCP 38.  Even when customers are configuring
their own BGP, the primary responsibility for BCP 38 is upstream at the ISP.
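For concreteness, near-the-router BCP 38 enforcement can be sketched in IOS-style syntax (the interface name and the 192.0.2.0/24 customer block below are hypothetical illustrations, not details from this thread):

```
! Strict unicast RPF: drop packets whose source address is not
! reachable back out the interface they arrived on.
interface GigabitEthernet0/1
 ip verify unicast source reachable-via rx
!
! Alternatively, an explicit ingress ACL that permits only the
! customer's assigned block, per BCP 38:
ip access-list extended CUSTOMER-IN
 permit ip 192.0.2.0 0.0.0.255 any
 deny ip any any
```

Either form is applied where the customer attaches, which is why the responsibility sits with the ISP rather than with customers who may not even know their address blocks.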


  If ISPs would refuse to route packets to customers that can't comply
  with BCP38 or that run unnecessary open resolvers or open resolvers
  unprotected by rate limiting, then a lot of problems would go away.

 I encourage you to start an ISP and report back your results :)

 I can either forward packets or filter them in many cases.  The damaging 
 packets, even when there are large attacks are still such a small percentage 
 of the overall traffic that I can't sacrifice them for the sake of others.  
 You've seen other providers back away from BCP-38 over the years.  I continue 
 to strive in this direction and I resent the fact that you think we're all 
 not trying to do it.

So an ISP's expectation of riches justifies taxing the rest of the
Internet to deal with the ISP's abusive customers?
For years, many ISPs and their employees said the same things about
their spamming customers.  Those that didn't respond to spam reports
with "Just hit delete and stop bothering me, sonny" demanded that I
donate my time to them (ISPs) by making spam reports.
Both tactics were, and were intended as, substitutes for dealing with
spamming customers.  ISPs whined about losing customers should they
outlaw outgoing spam.  Some, such as MCI/UUNET, spun lies about the
technical impossibility of counting outgoing TCP SYNs to port 25 as
reasons why MCI/UUNET could not find spam friendly resellers.

Today the spam problem is largely solved, partly by better filters,
but also by ISPs discovering that ignoring spam is unprofitable.


 If you think you can build a better router that can do it right,
 please do.  The market needs it, but that's OT for this list.

That's yet another irritating and offensive red herring, because I
said nothing about building better routers and because better routers
are not needed for BCP 38.  What's needed is the equivalent of the spam
solution, de-peering ISPs that can't be bothered.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Should medium-sized companies run their own recursive resolver?

2013-10-17 Thread Vernon Schryver
 From: Carlos M. Martinez carlosm3...@gmail.com

  Also, customer CPE equipment is poor and ...

 Agreed. CPEs cannot be trusted.

That fact is a poor argument for trusting the recursive resolvers
of the organizations responsible for that worse than junk CPE.  Most
of that worse than trash CPE is specified, tested, provisioned, and
maintained by the same outfits.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Should medium-sized companies run their own recursive resolver?

2013-10-16 Thread Vernon Schryver
 From: Jared Mauch ja...@puck.nether.net

  phones, and other devices behind a NAT router owned by and remotely
  maintained by Comcast.  Instead the question concerned a business with
  2 IT professionals.  Relying on distant DNS servers is negligent and
  grossly incompetent for a professionally run network. 

 As with many things we will have to disagree.

 Not everyone has the same skill set as those on this list, and that curve 
 goes down rather quickly.

I can't help noticing that Jared Mauch noticed and disagreed with my
conclusion about relying on distant DNS servers but overlooked or
ignored the security reasons compelling the conclusion.  He evidently
also overlooked the contradiction or irony in his previous note:

] Everyone else should just use either their ISP (with NXDOMAIN
] rewriting turned off) ...

] Folks like Comcast have large validating resolvers.  Their customers
] should use them.  

despite https://www.google.com/search?q=COMCAST+dns+hijacking

If you check the pages found by that URL, you'll see
  - older reports that Comcast was phasing out DNS hijacking
  - more recent reports of redirection or hijacking of 53/UDP
 packets--not just falsified results from those big Comcast DNS
 servers but packet hijacking
  - far more complication, confusion, and mystification than is
 realistic to expect a two person IT department to resolve.

It's clear that a simple, secure business DNS configuration does
*not* involve a consumer grade ISP.  (I don't mean to criticise any
particular consumer grade ISP.  They are all similar.  I'm not even
sure that DNS result or packet hijacking is a bad thing for consumer
households.)

However, not just tolerating but encouraging people without basic
network and computer competence to run Internet businesses is like aviation
before the FAA.  In the first years enthusiasts bought, built, or
borrowed airplanes and went into the barnstorming or airmail businesses.
Then the air industry got government licenses and regulations.  From
Kitty Hawk to the 1926 Air Commerce Act licensing pilots was 23 years.
http://www.faa.gov/about/history/brief_history/

Whether you mark the start of public interest in the Internet with the
1972 CACM articles about the ARPANET (my DOC lab employer read those
papers, got an appropriation, and linked our computers soon after),
CSNET in the early 1980s when many commercial outfits got
Internet connections, or a date between, it is more than 23 years later.

I don't like the idea of government Internet licenses, but a two person
IT shop using distant DNS servers, not to mention a consumer grade
ISP, is as culpable as buying an old potato washer to clean your
cantaloupe crop for market.  I'm uncomfortable with the criminal charges
against the Jensen brothers, but if that's what it takes to get people
to learn enough and do it right ...
https://www.google.com/search?q=Jensen+cantaloupe


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Should medium-sized companies run their own recursive resolver?

2013-10-16 Thread Vernon Schryver
 From: Bob Harold rharo...@umich.edu

 I think the problem with a DNS appliance is that it becomes an open DNS
 resolver, unless it is configured to know the subnet(s) used internally,
 and updated every time that changes. I don't think the firewall could
 reasonably be asked to block only recursive DNS traffic, although perhaps
 it could block all inbound DNS requests, except to an internal
 authoritative DNS if you had one. I cannot think of any other simple
 workaround. Users are likely to find some way to turn off the recursion
 limiting anyway, like setting the internal subnet to 0.0.0.0/0, which
 solves their problem of updating it when subnets change, but leaves it
 open to the world.

There is a trivial and easy way to keep a recursive DNS server intended
for an organization with a 2 person IT department from being open to
the entire Internet.  Set the IP TTL on responses, both TCP and UDP,
to a small number such as 3 or 5.
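A minimal sketch of the TTL trick (Python, assuming a Unix-style socket API; the function name is my own):

```python
import socket

def make_scoped_socket(ttl=3):
    """UDP socket whose outgoing datagrams carry a small IP TTL, so
    answers expire after a few router hops and never reach distant
    WAN clients.  The same IP_TTL option applies to TCP sockets."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    return s

s = make_scoped_socket(3)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))  # 3
```

A server would set this on the sockets it answers from; clients more than a few hops away simply never see the responses, with no ACL to maintain.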

There are business reasons to keep a small DNS appliance intended for
a small business with a 2 person IT department from being used by a
big outfit.  You might limit the number of DNS responses per second,
hour, or day, but it might be better instead or also to limit the
number of client IP addresses.  It would be trivial and easy for a DNS
appliance to require ACLs permitting no more than X IPv4 addresses and
Y IPv6 /64's.  Ship it configured with 10.0.0.0/8 and have it refuse
to accept non-RFC 1918 ACLs with too big a total.
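That ACL budget could be enforced in a few lines; this sketch uses Python's `ipaddress` module, and the limits are invented for illustration:

```python
import ipaddress

MAX_V4_ADDRS = 1024      # hypothetical "X" limit for illustration
MAX_V6_SLASH64S = 16     # hypothetical "Y" limit

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def acl_acceptable(prefixes):
    """Accept a proposed client ACL only if it covers no more than
    MAX_V4_ADDRS non-RFC-1918 IPv4 addresses and MAX_V6_SLASH64S
    IPv6 /64s; RFC 1918 space is always allowed."""
    v4_total = 0
    v6_64s = 0
    for p in prefixes:
        net = ipaddress.ip_network(p, strict=False)
        if net.version == 4:
            if not any(net.subnet_of(r) for r in RFC1918):
                v4_total += net.num_addresses
        else:
            # count how many /64s the prefix spans
            v6_64s += 1 if net.prefixlen >= 64 else 2 ** (64 - net.prefixlen)
    return v4_total <= MAX_V4_ADDRS and v6_64s <= MAX_V6_SLASH64S

print(acl_acceptable(["10.0.0.0/8", "192.0.2.0/28"]))  # True
print(acl_acceptable(["203.0.113.0/16"]))              # False
```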

A little monitoring of requests from unexpected IP addresses and some
GUI sugar would make it easier for users to maintain their ACLs than
what I've seen in the DNS, AD, WINS, etc. settings of a Windows box.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Should medium-sized companies run their own recursive resolver?

2013-10-15 Thread Vernon Schryver
 From: Jared Mauch ja...@puck.nether.net

   ... Mercedes...

 Have you ever driven one?  They are mighty nice :)

 Back in the 90's I would agree everyone should run a DNS server as
 the network wasn't as robust as it is today.

On the contrary, in the relevant sense, the network today is less
robust than it has ever been.  You don't want a commodity luxury
sedan while driving across Syria, Iraq, Afghanistan, or the Gobi Desert
despite the fact that many roads in Europe and N.America are more
robust than they've ever been.  Where roads are bad or non-existent
or where there are significant security hazards, you need something
with more armor, ground clearance, spare fuel, water, emergency supplies,
or even guns than are economical or safest elsewhere.

 Some folks may need local elements (e.g.: MS DNS/AD, but these should not be 
 exposed to the internet...

 Everyone else should just use either their ISP (with NXDOMAIN rewriting 
 turned off) or someone like OpenDNS that can help enforce some security 
 policies and practices with a few knobs being turned at most.

 Folks like Comcast have large validating resolvers.  Their customers should 
 use them.  Folks here are surely going to do the right thing the majority of 
 the time.  The vast majority of others are going to set things up once and it 
 *will* be left to rot.  This isn't intentional, but it naturally happens.

The question had nothing to do with J. Sixpack with 37 televisions,
phones, and other devices behind a NAT router owned by and remotely
maintained by Comcast.  Instead the question concerned a business with
2 IT professionals.  Relying on distant DNS servers is negligent and
grossly incompetent for a professionally run network.  When the DNS
servers in question are known to lie, it should be as much a crime as
failing to wash your cantaloupes in Clorox.
https://www.google.com/search?q=COMCAST+dns+hijacking
https://www.google.com/search?q=jensen+farms+criminal
The same applies when there are Great or small firewalls between the
DNS client and distant validating recursive resolvers.

Even Joe and Joan Sixpack should, if they can, think carefully about
relying on distant DNS servers.  If you wouldn't give your ISP your
bank passwords, then you shouldn't rely on your ISP to validate your
RRs.  Those who control your RRs can get your passwords, albeit with
varying effort.

Should Joe and Joan rely on government approved DNS servers while they
are in China, Iran, or Syria?

Never mind that if the U.S. NSA, FBI, CIA, etc. are competent, they've
used DNS creatively such as to install software on the computers of
their targets or deploy MX RRs to monitor email.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Microsoft

2013-09-29 Thread Vernon Schryver
 From: Jim Popovitch jim...@gmail.com

  Ha!  I removed one ~6 months ago and since then I've been 550
  rejecting the reports... yet they still come in.

 Oh wow.  It was more than 9 months ago (_dmarc.spammers.dontlike.us
 was removed on 15-Jan-2013).

I saw something similar from Microsoft while playing with DMARC.
Microsoft never forgot _dmarc records I simply deleted.  However,
publishing records with reporting or checking explicitly turned off
was eventually effective.  I think it might have taken a week for the
reports to stop.
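For concreteness, one plausible reading of "explicitly turned off" is the difference between deleting the `_dmarc` record outright and publishing one with a policy of none and no reporting addresses (the domain name here is hypothetical):

```
; Deleting _dmarc.example.com did not stop the reports.
; Publishing a "none" policy with no rua/ruf reporting
; addresses eventually did:
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none"
```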

This might sound like a bug or problem with DMARC at Microsoft, but
it might be a feature implied by the same design requirements that cause
DMARC to apply SPF DNS records more broadly than RFC 4408 allows.  For
example contributions to this mailing list from domains using DMARC+SPF
with rejection will not be seen at free Google or Microsoft mailboxes,
because the SMTP envelope Mail_From value will not be in even relaxed
alignment with the From: field in the forwarded contributions.  It
might even be a feature instead of a bug that according to my tests,
if you configure a Microsoft mailbox to forward to a Gmail mailbox or vice
versa, the forwarded mail will not be delivered.
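The relaxed-alignment failure described above can be sketched as follows. This is a deliberately crude check: real DMARC implementations derive the organizational domain from the Public Suffix List, and the example domains are only illustrative.

```python
def org_domain(domain):
    """Crude organizational domain: the last two labels.  Real DMARC
    implementations consult the Public Suffix List instead."""
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def relaxed_aligned(mail_from_domain, header_from_domain):
    """DMARC relaxed alignment: organizational domains must match."""
    return org_domain(mail_from_domain) == org_domain(header_from_domain)

# Direct mail: envelope and header domains align.
print(relaxed_aligned("mail.example.com", "example.com"))     # True
# Mailing-list forwarding: the envelope is the list's bounce
# domain while the header From: is the contributor's domain, so
# SPF-only DMARC with rejection refuses the forwarded post.
print(relaxed_aligned("lists.dns-oarc.net", "rhyolite.com"))  # False
```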

Contrary to what one might guess from
https://tools.ietf.org/html/draft-kucherawy-dmarc-base-01
https://en.wikipedia.org/wiki/DMARC and http://www.dmarc.org/overview.html
DMARC seems intended to improve communications between large scale
mailbox providers such as Microsoft and Google and bulk mail senders.
DMARC tells bulk mail advertisers such as American Greetings and
Linkedin about inbox placement.  It tells bulk mail senders who might
prefer their bulk mail not be forwarded, such as Fidelity Investments
and JPMorganChase.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Implementation of negative trust anchors?

2013-08-23 Thread Vernon Schryver
 From: David Conrad d...@virtualized.org

 Exactly so.  However pragmatically speaking if someone (say NASA
 perhaps?) screws up signing their zone, it isn't the
 zone-signing-screwer-upper that gets the phone calls, it is the eyeball
 networks that are doing the validation.  Without NTA, the eyeball
 network operators have a choice, eat the cost of those calls or turn off
 validation _for ALL signed zones until the zone-signing-screwer-upper
 fixes their problem_.

 I gather you believe eating the cost is the right answer.

YES!  Eyeball networks are paid by their customers to act as
pre-front-line support for bad DNS delegations, broken HTTP servers,
and all other content provider problems.

Saying otherwise for any of the services sold by eyeball networks is
another step down the slope toward content providers paying eyeball
networks for eyeballs and the conversion of the Internet into what it
was in about 1965 when it was owned by Ma Bell and the three television
networks.

Of course, it wasn't called the Internet, but it was the contemporary
equivalent.  I was around for the Carterphone decision and the incredible
freedom to connect computers that followed soon after (in about 15
years--remember DAAs?).  I was also around to see the ARPANET use
56kbps leased lines that were not only incredibly slow but required
incredible massaging of Ma Bell bureaucrats who required you to admit
who was really in charge of your business.  (I was at TIP-25 at DOCB)



} From: David Conrad d...@virtualized.org

} Vernon,

} If the only solution to someone else screwing up signing is to turn off
} validation for all zones and the likelihood of someone screwing up
} signing scales with the number of folks signing, why bother ever turning
} validation on?

Eyeball networks would be best served by turning off DNSSEC.  Comcast
not withstanding, DNSSEC does nothing to help their bottom lines.

Let's be honest and admit that talk about NTA today and tomorrow (as
opposed to last year) is really a statement of regret about DNSSEC and
a demand that DNSSEC just go away.  If you honestly believe in DNSSEC's
promise of letting me sign my zones, then you must also let me mess
them up.  Essentially no one who will use NTA will have any inkling
whether bad signatures on my zones reflect my incompetence or actions
of my (and or their) enemies.

Many of us here now can and are happy to make good guesses about whether
a DNSSEC failure is due to zone operator error or enemy action, but
that won't be true of most future NTA users, including big outfits.
I read the thinness of http://dns.comcast.net/ as saying that Comcast,
that major NTA supporter, has not only given up trying to diagnose
other people's DNSSEC problems but quietly shelved NTA.


}  On the contrary, NTA is a new tool for deliberately introducing new
}  faults in the data you give your DNS clients.

} True.  This is why I suspect corporate types will have hesitancy to use
} NTAs and wish to remove them as soon as possible.

On the contrary, given minimal cover such as an RFC, corporate types
at eyeball networks will mandate add-only NTA lists that only grow and
never lose entries.  They'll say politically correct things about
DNSSEC but use NTA to minimize support costs and maximize profits from
activities that are incompatible with DNSSEC such as typosquatting.


Vernon Schryver  v...@rhyolite.com




Re: [dns-operations] Implementation of negative trust anchors?

2013-08-23 Thread Vernon Schryver
 From: Evan Hunt e...@isc.org

  On the contrary, given minimal cover such as an RFC, corporate types
  at eyeball networks will mandate add-only NTA lists that only grow and
  never lose entries.

 Obviously that's possible, but IIRC the draft requires that NTA entries
 have limited (and short) lifetimes.

HAH!  If RFCs were Law, then the DNSSEC RFCs would have long
since answered any question about NTA as ABSOLUTE NEVER!
In the real world, RFCs are no more or less than hints on what
to do to minimize complaints and sanctified excuses for doing
what you want to do anyway.


 If we decide to implement this in BIND (it's on our roadmap, but with a
 question mark), I expect the NTA lifetime will default to an hour and be
 capped at a day.  NTAs would be inserted via the control channel (rndc)
 rather than a configuration file change, and wouldn't persist across
 system restarts.  An operator could write a script to continually
 insert the same NTA's over and over again forever, but it would be
 easier to allow them to lapse as intended.

I agree that's not nearly as evil as NTAs in a configuration file,

or a cron script that runs every 30 minutes and does a few 100K
`rndc nta` commands to fix that problem that someone reported
year before last in the .gov signatures,
and protect the advertising revenue from those typosquatted domains.


 I was against NTAs when they were first proposed; I've come around.
 Disabling validation because of signing failures is the wrong thing
 to do, but people are going to do the wrong thing whether I like
it or not, and if we must choose between evils, I prefer `rndc
validation off nasa.gov` to `rndc validation off`.

On the contrary, in the real world this year, the people using
`rndc nta` will decide after the 42nd time in 48 hours of renewing
the protection for the .gov problem
(not counting the 6 renewals that should have been done between
01:00 and 03:00 when the people empowered to use `rndc nta` were asleep)
to either `echo rndc nta nasa >> nta-cron-script` or
`rndc validation off`.

Next year, those empowered people will be tired of diagnosing DNSSEC
problems and arguing with their bosses about the value of DNSSEC.
They'll give second-line support a button to push that does
`echo rndc nta $1 >> nta-cron-script`.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Implementation of negative trust anchors?

2013-08-23 Thread Vernon Schryver
 From: Evan Hunt e...@isc.org

 it or not, and if we must choose between evils, I prefer rndc
 validation off nasa.gov to rndc validation off.

 ...

} A document that advised limits on the use of NTAs -- for example, the
} recommendation in Jason's draft that they not persist for more than
} a day -- would be okay by me.

On second thought,

Consider the situation of resolver operators confronted with a
case where you might use `rndc nta`.  Almost all of them (even now,
most) lack the expertise, time, or inclination to figure out which
domain to hit with `rndc nta sub.dom.example.com`.
They'll only know (or hope) that the irate phone calls from principals
about broken lesson plans are related to DNSSEC problems.

They would be better served by `rndc validation off X hours` with 
a limit on the X hours of 24 than any sort of NTA hook.

If you don't let them use `rndc validation off X hours`, most will
use `rndc nta gov` because their users will be shouting about government
web site problems and they won't have the time, inclination, or
permission to discover that it's only apod.nasa.gov.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Implementation of negative trust anchors?

2013-08-23 Thread Vernon Schryver
 From: wbr...@e1b.org

  and protect the advertising revenue from those typosquatted domains.

 Why would a typosquatted domain fail DNSSEC?  When DNSSEC is universal and 
 easy to do, it will  be signed from the TLD on down, just like every other 
 domain.

I wasn't talking about the typosquatters who give the registrar and
registry their cuts.

I meant the typosquatters who (at first) replace only NXDOMAINs
(and later decide that what's good for the legitimate typosquatters
is good for them).

I assume we all remember the NXDOMAIN typosquatting kerfuffles.
Are any (potential) eyeball networks (in hotels for example) squatting
on NXDOMAINs today?


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Implementation of negative trust anchors?

2013-08-23 Thread Vernon Schryver
 From: David Conrad d...@virtualized.org

  They would be better served by `rndc validation off X hours` with
  a limit on the X hours of 24 than any sort of NTA hook.

 So, because one zone messes up signing, instead of opening up that one
 zone to spoofing attack you think it is better the resolver operator
 opens up all zones to spoofing attack?

 This seems wrong to me.

It's wrong only if you accept the false choice between validation off
and a targeted NTA.  We're talking about *resolver* server operators,
not authority operators or IETF participants.

Big resolver server operators not selling resolution will not bother
figuring things out.  They'll ignore complaints, send users chasing
whois phone numbers, or turn off DNSSEC.  They don't have time or
permission to diagnose other people's DNSSEC problems enough to use
NTA correctly.  See the Comcast web page for proof of that.

The resolver servers selling resolutions will use NTA correctly,
but they already have NTA and don't care about opinions from peanut
galleries including the IETF.

The majority of resolver server operators will not use NTA more
than a half a dozen times.  Then they'll treat DNSSEC errors
like bad delegations or use one form or another of validation off
including NTA as close to the root as they can go.  The best bet to
keep them from a static validation off is an automatically
sunsetting form.


 I'd suggest that in the BCP/RFC/whatever, in addition to recommending
 that NTAs be time capped and not written to permanent storage, it should
 also recommend NTAs be written as specifically as possible.

Yes, that transient NTAs are a good idea is something I'd not
heard/noticed/understood until today, but it does not redeem NTA.

I can't believe you're seriously suggesting that words in any IETF
document telling people to use narrow NTAs would have any effect
on resolver operators.

Practically no one who might use any NTA hook will understand or
(be allowed to) care enough to figure out to hit cnn.co.uk instead
of cnn.com.  Of necessity they'll just keep hitting the NTA button
with semi-random domains until the calls stop.  The wise ones will
go straight as high as they can, functionally to validation off.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Implementation of negative trust anchors?

2013-08-22 Thread Vernon Schryver
 From: wbr...@e1b.org

 Running the DNS for 100+ school districts and 400,000+ devices, I really, 
 REALLY don't want to be the one saying Sorry, you can't use the site 
 called for in your lesson plan today because they messed up the DNSSEC 
 records.  Management's response would be Just make it work!

 Without a per domain NTA, the only option would be to turn off DNSSEC, 
 returning to square one.

You don't do crazy things like poke around to get an old copy of
their zone and publish a pirate copy when they mess up something
else.  You say something like "They messed up."
In this case, you could and should say something like:
  Our network security defenses are telling us that there is
  something wrong there.  Instead of lesson plans, you might be
  getting child porn if you visit their pages today.



 Our browsers give us the option to trust invalid TLS certificates, some 
 even storing it indefinitely.  Is an NTA much different?

Yes, because TLS differs: public PKI certs are merely a
charade of pretend security intended to fool the rubes and harvest
money from those who, for various good and bad reasons, cannot refuse
to pay the commercial PKI cert vendors.  (Yes, some commercial PKI
certs are free, which says all that needs to be said to anyone with
0.1% of a clue about the security of every commercial PKI cert.)
A valid commercial PKI cert tells you *NOTHING* about the web data
it purports to guarantee except that someone was willing to pay time,
effort, and perhaps some money to appear trustworthy.

Perhaps in the real world, no evil nasty hackers are going to replace
your staff's educational pages with nastiness with either bogus
certs or corrupt DNS, but things are definitely otherwise elsewhere.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Implementation of negative trust anchors?

2013-08-22 Thread Vernon Schryver
 From: Suzanne Woolf wo...@isc.org

 I don't like it either, but it limits the damage done by a DNSSEC
 failure to status quo ante rather than something worse.

That is mistaken.  You get the status quo ante by simply turning
off validation.
Turning off validation is the only sane response this year to phone calls
reporting the breakage of a major domain.  Even if you have an NTA, from
now on you'll do as Comcast evidently is now doing and decline to pay
the current and future costs of adding minor domains to your NTA list.
You'll just tell your users "Stuff Happens" and perhaps help them use
`whois` to find someone else to bother.  Last year differed.

I trust (wish?) we all learned the excessive costs of organization-wide
white/blacklists from the last 15 years of the spam wars.


  madness test: would we have bothered with DNSSEC at all, back in the
 day, if NTA had been known as a definite requirement?

 I realize this is something of a rhetorical question, but I'll bite: if
 it were framed as a way of promoting incremental, fault-tolerant
 deployment and mitigating the cost shifting of "I screw up and your
 phone rings", some of us might well have been happy to include it.

On the contrary, NTA is a new tool for deliberately introducing new
faults in the data you give your DNS clients.  It is a tool for lying
to your DNS clients with data that you swear is valid and signed but
that you know is at best unsigned and quite possibly invalid or worse.


If I didn't know the inevitable user response to security problems,
I'd favor NTA as a way to get validation moved to where it must
eventually be, at least as close as the user's nearest router.  After a
few kerfuffles in which it is discovered that telephants have been
ordered by government or corporate bosses to use NTA to obscure the
hijacking of domain names on grounds of copyright violation,
terrorism, publication of national defense secrets, or failure by
content providers to agree to telephant tariffs, one might hope
that users would stop using Central Facility's DNS validators.

Of course, besides the inevitable non-response by almost all users,
some users would probably notice, figure it out, and care.
But as always, enough of the bosses and their minions won't
believe or care.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Geoff Huston on DNS-over-TCP-only study.

2013-08-21 Thread Vernon Schryver
 From: Geoff Huston g...@apnic.net

 On the other hand its no more serious than any other form of small
 TCP transaction based services that are subjected to massive volumes,
 such as, say, a search engine front end.

Isn't that why HTTP, SMTP, and other TCP transaction services have
been changed to reduce the ratio of TCP connections to transactions?

Isn't it also true that DNS transactions are much lighter weight than
HTTP, SMTP, and other TCP transaction applications?  Could the gTLD
roots exist in anything like their current forms if DNS transactions
cost as many CPU and stable storage computrons as an HTTP GET of
a purely static page (even without TLS)?


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] DNSResolvers.com will be shutdown

2013-06-28 Thread Vernon Schryver
 From: Feng He fen...@nsbeta.info

  http://blog.easydns.org/2013/06/27/dnsresolvers-open-resolvers-will-be-shut-down/

  The DNSResolvers.com free and open public resolvers will be shut down,

 Sorry to hear that but we have met the same DDoS problem days ago so we 
 have to stop the free DNS hosting.

Just for my curiosity and not to suggest changing any plans,
were those attacks intended to hurt the open resolvers themselves
or were they reflection attacks?

In either case, was RRL tried?


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Clear DNS cache

2013-06-20 Thread Vernon Schryver
  ..It seems your nameservers don't agree on the SOA serial number!... 

I wouldn't put too much stock in what http://viewdns.info/ says
about anything, and not just because how third parties digest
your RRs is not dispositive, or because historically the web DNS
digesters have always spread a lot of bogus fear, uncertainty,
doubt, and misinformation.  All that really matters is what `dig`,
`nslookup`, other tools, and recursive and stub resolvers say.

They're badly confused about the DNS RRs for rhyolite.com.  Never
mind what I suspect are their glue confusions, perhaps due to IPv6
or perhaps due to my using well distributed secondaries.  
Besides "your nameservers don't agree on the SOA serial number",
they also say this about my SOA:

Your Start of Authority (SOA) record is:

Primary nameserver: 5
Hostmaster E-mail address: 2
Serial number: 28800
Refresh: 20130815213614
Retry: 20130616213614
Expire: 26805
Minimum TTL: rhyolite.com.

and then hector me about the implications of that silly nonsense.

This is what an old version of `dig +dnssec` on someone's 
system (not mine) says:

rhyolite.com.   27587   IN  SOA ns.rhyolite.com. 
named-mgr.rhyolite.com. 1371422174 3600 900 2592000 7200
rhyolite.com.   27587   IN  RRSIG   SOA 5 2 28800 
20130815213614 20130616213614 26805 rhyolite.com. 
uTprgMR4QbNDzyBKCgDUINT1ToLVnSvB9UZ3IOoNofQmx9kQ5u8toMj+ 
aEX+MN7cUJqyXvYqrG3f4jf9ezfXEaOUkaMVGYitXK+FfA80jOGL2d9s 
EPSGjFrPu47mcy8hbkz9PAYtMY1wG/4iIpy/kJLXB/sRMfkdwtA7NKst s0M=

Notice the 20130815213614 in the RRSIG.  I think an exegesis of RRs
by code written by someone who didn't reflexively deal with getting
unexpected RRs from strange DNS servers should not be interesting to
anyone, and especially not when the extra RR is standard and only
included when you explicitly ask for it with the flag bit.


They also say:

Your Mail eXchanger (MX) records are:

5 2 [TTL=IN]

and they point out the various crazinesses of that.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Best Practices

2013-06-16 Thread Vernon Schryver
 From: Florian Weimer f...@deneb.enyo.de

  a) Secure configuration guidelines (RRL you can't make part
  of that, because it requires too much tuning IMHO).

I think RRL is too young for published, official secure configuration guidelines.


  rrl's defaults work fine on every authority server i've tried.

 That's probably because those servers don't see traffic from resolvers
 which in turn have clients that send queries which are a little bit
 creative.

What is the nature of the troublesome traffic or tuning on authority
servers that you've seen or heard about with settings as close to the
defaults as you can get without leaving RRL turned off?

rate-limit { responses-per-second X; };

probably for X between 2 and 20,
and perhaps X=15 as suggested on http://www.redbarn.org/dns/ratelimits

Would the too much tuning problem be fixed by adding
rate-limit { responses-per-second 15; };
to the example in the BIND9 ARM text?  


 ISC-TN-2012-1 is unfortunately not very clear about the actual key
 used to determine the bucket to account against.

What is the relevance of the shortcomings of
http://ss.vix.su/~vixie/isc-tn-2012-1.txt to whether RRL works on
authority (or even recursive**) servers without too much tuning?


   Section 2.2.1 claims
 that many possible questions can yield the same answer and suggests
 that the rate limit applies to those same answers (which apparently
 do not include the transaction ID or question section), but section
 3.1 talks about the QNAME.

It wouldn't make sense to rate limit based on transaction IDs,
because they're supposed to be functionally unique.
The R in RRL stands for response, and so rate limits should ignore
the question section as much as possible.
For non-empty, non-error, non-wildcard generated, non-referral
responses, the key is {class,qname,qtype,client IP block}.
Section 2.2.1 is about the special cases where answers are too similar.
The rate limit for NXDOMAIN responses is applied to the domain
from the SOA, because response rate limiting would not be a useful
DNS reflection attack mitigation mechanism if it treated the nearly
identical responses to the practically infinite number of different
random.example.com questions as distinct.
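The keying just described can be sketched in a few lines.  This is an illustration only, not the BIND9 code; the tuple layout, the `/24` block size, and the string rcode are assumptions:

```python
import ipaddress

def rrl_bucket_key(client_ip, qclass, qname, qtype, rcode,
                   soa_domain=None, v4_prefix=24):
    # Limits are accounted per client address block, not per single address.
    block = str(ipaddress.ip_network(f"{client_ip}/{v4_prefix}", strict=False))
    if rcode == "NXDOMAIN":
        # All NXDOMAINs under one zone share a bucket keyed by the SOA's
        # domain, so each random.example.com probe can't get a fresh bucket.
        return ("NXDOMAIN", soa_domain, block)
    # Ordinary answers: {class, qname, qtype, client IP block}.
    return (qclass, qname.lower(), qtype, block)
```

With a key like this, a flood of different random qnames that all draw NXDOMAIN from the same zone lands in a single bucket and is limited together.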

Is there a specific question about the key in BIND9 RRL?


** I continue to be surprised and disappointed by people who spin
"RRL is not recommended for recursives" into
RRL-doesn't-work-especially-on-recursives and then flog that FUD.
"Not recommended" differs from "doesn't work", "denies all service",
"is a security hole", or "breaks the intertubes".
RRL on recursives could in theory slow down applications that repeat
requests a lot, but I do not recall hearing of even one case where
end users noticed or complained.
Recursive servers should generally not need RRL, because they shouldn't
be open and so needn't worry about reflection DDoS attacks.
Not recommending RRL on recursive servers is like not prescribing
statins for people without high cholesterol levels.

Besides, open recursives must have some kind of rate limiting,
as people who run professional open recursive servers say.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Best Practices

2013-06-16 Thread Vernon Schryver
 none of the retransmissions, because
 the first burst of 1000 drove the count negative and we're
 still in the window.  It will slip half and drop other half.
 5. repeat from #3

For 30 concurrently, simplistically rendered IMG tags, there
will be 1 timeout.  If your web browser limits itself to fewer than
15 concurrent connections to any single HTTP server, it probably
won't be affected at all.
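The slip-half, drop-half accounting quoted above can be sketched with a toy bucket.  The parameter names and values here are illustrative assumptions, not anyone's measured defaults:

```python
def rrl_respond(n_queries, rps=15, slip=2, window=15):
    """Account one same-second burst against a single RRL bucket.
    Returns (answered, truncated, dropped) counts."""
    credits = rps                 # the bucket starts full for this second
    floor = -window * rps         # the count may go negative, bounded below
    answered = truncated = dropped = 0
    over = 0
    for _ in range(n_queries):
        if credits > 0:
            credits -= 1
            answered += 1
        else:
            credits = max(credits - 1, floor)
            over += 1
            if slip and over % slip == 0:
                truncated += 1    # "slip": send TC=1 so a real client retries
            else:
                dropped += 1
    return answered, truncated, dropped
```

A burst of 1000 queries gets 15 full answers; the remaining 985 are split between TC=1 slips and silent drops, which is why a retrying legitimate client usually recovers over TCP while a reflection victim sees little traffic.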

For 1000 concurrent spam, wouldn't it be wiser to rate limit your spammers?


Vernon Schryver  v...@rhyolite.com

Re: [dns-operations] DNSCrypt.

2013-05-31 Thread Vernon Schryver
 From: Ken A k...@pacific.net

 What is keeping nameserver vendors from building this into servers?

  http://www.opendns.com/technology/dnscrypt/
  http://dnscrypt.org/
  https://github.com/Cofyc/dnscrypt-wrapper

Preventing men in the middle raises key distribution questions,
so I went looking for answers.
https://github.com/jedisct1/dnscrypt-proxy/blob/master/TECHNOTES
says
The following information has to be provided to the proxy:
- The provider name (defaults to 2.dnscrypt-cert.opendns.com.)
- The provider public key (defaults to the current one for OpenDNS).

...

The proxy doesn't cache replies. Neither does it perform any DNSSEC
validation yet.
This is better handled by a separate process or by linking libunbound.

That makes the proxy+stub resolver and all of its implicit practical
problems sound a lot like a stub resolver with a copy of the root
DNSSEC key doing its own DNSSEC validation.

I see commercial advantages in this mechanism to OpenDNS in locking
in customers.  (I intend no offense to OpenDNS.  Every business
does and should look for ways to retain customers.)
I don't understand why DNSCrypt is better than stub resolvers that
do DNSSEC validation.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] DNSCrypt.

2013-05-31 Thread Vernon Schryver
 Yes, except that DNS-over-TCP helps reduce the risk of MITM, which
 is a perceived channel-validation benefit of DNSSEC.

How does DNS/TCP reduce MITM risks enough to talk about?  How is
DNS/TCP a problem for governments and other bad actors?  25 years
ago I naively assumed that transparent and translucent proxies
for popular TCP based protocols were not practical at scale.  Then
AOL started proxying port 25 and now everyone has man in the middle
proxies for all kinds of TCP applications including some that are
ostensibly protected with TLS.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] bind-9.9.3rc2 ANY+TCP patch

2013-05-16 Thread Vernon Schryver
 From: Matthijs Mekking matth...@nlnetlabs.nl

  https://indico.dns-oarc.net/indico/getFile.py/access?contribId=4resId=0materialId=slidesconfId=0
 
  Page #12

  I also wonder about the definition of false positive.  There are many
  plausible candidates.

 I agree. Basically it is a query from an attacker that is not being 
 dropped.

That sounds like a false negative instead of a false positive.
A false positive would be dropping or slipping a legitimate or
non-attack query.
A true positive is correctly identifying and dealing with an
attack packet.
A true negative is correctly identifying and not harming a
non-attack packet.
https://www.google.com/search?q=false+positive
http://www.mathsisfun.com/data/probability-false-negatives-positives.html
https://en.wikipedia.org/wiki/Type_I_and_type_II_errors
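The four terms, restated as a tiny attack-detection classifier (a generic illustration of the standard definitions, not RRL code):

```python
def classify(is_attack, flagged_as_attack):
    # "Positive" means the test judged the packet to be part of an attack.
    if flagged_as_attack:
        return "true positive" if is_attack else "false positive"
    return "false negative" if is_attack else "true negative"
```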

Perhaps "Slip 1" on page #12 refers to the RRL parameter.  If I assume
"False positives" means truncated responses (which are true positives
instead of false positives), then all of the table except the "TCP
responses" column makes sense.  I have no idea what the "TCP responses"
column means.


  I know it has more to it than that. It might be a good idea to 
 define the term in the technical note. I can write some initial text, if 
 that is appreciated.

I would appreciate a few words here.


I don't understand the graphs and tables, but I agree with the
conclusions on page #20.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] bind-9.9.3rc2 ANY+TCP patch

2013-05-16 Thread Vernon Schryver
 From: Matthijs Mekking matth...@nlnetlabs.nl

  https://www.google.com/search?q=false+positive
  http://www.mathsisfun.com/data/probability-false-negatives-positives.html
  https://en.wikipedia.org/wiki/Type_I_and_type_II_errors

 So a false positive is a type I error, aka the incorrect rejection of a 
 true. 

"True" is an adjective instead of a noun in this context.  The nouns
in this context are "positive" and "negative".

Putting that back in RRL perspective, I translate that to a false 
 positive is the failure to identify and deal with an attack packet 
 (like above).

That is mistaken.  We are talking about testing for (and perhaps
mitigating) attack packets.  A positive for our test is this packet
is an attack packet.   Deciding that a packet is not an attack packet
is a negative.   An accurate test or determination that a packet is
or is not an attack packet is a true positive or true negative.
An inaccurate determination by the test is a false positive or false
negative.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] bind-9.9.3rc2 ANY+TCP patch

2013-05-15 Thread Vernon Schryver
 From: Jared Mauch ja...@puck.nether.net

 I thought I'd share this to anyone that wants to just force all TYPE=ANY 
 queries over TCP to prevent those from coming from spoofed locations.

 This is a crude but effective hack.  It doesn't stop the system from 
 recursing to find the response.

 http://puck.nether.net/~jared/bind-9.9.3rc2-tcp-any.patch


I can understand simplistic DNS reflection mitigation in firewalls,
especially when response rate limiting is not available in the DNS
server implementation or when local policies forbid the use of patches.
I don't understand why one would use a patch like that with its
limitations and drawbacks (e.g. usable only on recent versions of
BIND9, affects only ANY, affects all ANY, doesn't limit the flood of
reflected truncated responses during attacks, no whitelisting for local
clients, not view-specific) instead of the full blown RRL patch for
9.9.3rc2, 9.9.2, 9.9.2-P1, 9.9.2-P2, 9.8.4-P2, 9.8.4-P1, or 9.8.5rc2.


By the way, why use qtype == 255 instead of qtype == dns_rdatatype_any ? 

Why #define TCP_CLIENT() and use the macro exactly once instead
something like
if (qtype == dns_rdatatype_any &&
    (client->attributes & NS_CLIENTATTR_TCP) != 0) {
If TCP_CLIENT() is used in query.c, then its definition should be moved
from client.c to bin/named/include/named/client.h and the several uses
of client->attributes & NS_CLIENTATTR_TCP in query.c replaced with
TCP_CLIENT().   It's bad form to define macros (or much of anything)
more than once, because you can be sure that eventually the definitions
will differ.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] bind-9.9.3rc2 ANY+TCP patch

2013-05-15 Thread Vernon Schryver
 From: Jared Mauch ja...@puck.nether.net

 Because of the FP ratio presented at the DNS-OARC meeting this
 past week.  It's suitable on a recursive resolver, where RRL is most effective
 on an authority.

 See 

 https://indico.dns-oarc.net/indico/getFile.py/access?contribId=4resId=0materialId=slidesconfId=0

 Page #12

I wonder to which RRL implementation those numbers apply?

Please recall that those slides appear to be from NLnet Labs and
that one of my concerns with the NLnet Labs RRL implementation is
the possibility of significantly more false positives than what I
hope are the practically none from the BIND9 RRL code.

I also wonder about the definition of false positive.  There are many
plausible candidates.


 This effectively does slip=1 and does away with any amplification and just 
 makes it
 a pure reflection attack.  Still not ideal, but doesn't amplify.

On the contrary, as I just now wrote in the ratelimits mailing list
http://lists.redbarn.org/mailman/listinfo/ratelimits
your patch does not affect amplification by authorities.
For example, if applied to an authority for isc.org, 
`dig +dnssec isc.org any @ams.sns-pb.isc.org'
would still reflect almost 4 KBytes for each 60 byte ANY request.
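The arithmetic behind that example, with the sizes approximated from the figures above:

```python
request_bytes = 60          # spoofed ANY request, per the example above
response_bytes = 4 * 1024   # "almost 4 KBytes" of DNSSEC-laden ANY response
amplification = response_bytes / request_bytes
print(f"amplification factor ~ {amplification:.0f}x")
```

Forcing TC=1 only on recursives leaves that roughly 68x authoritative amplification untouched.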


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] bind force qtype=ANY to TCP

2013-05-15 Thread Vernon Schryver
 From: Jared Mauch ja...@puck.nether.net

 The folks that are most concerned with RRL are those expecting queries
 from stub resolvers, I think this would mitigate this risk.

}  Is it intentional that the patch does not affect authoritative ANY
}  responses?  I think the patch would fail to stop the authorities for
}  isc.org from answering `dig +dnssec isc.org any @ams.sns-pb.isc.org'
}  with almost 4 Kbytes.
}
} It's somewhat accidental, but I think OK.

We disagree on both of those issues.  Reflections from recursive
servers are bad, but reflections from authorities are as bad if only
because many authorities have more resources and so can blast more
bits at a DoS target than many recursives.  There's also the idea
that open recursives should be closed for more reasons than complicity
in reflection DoS attacks but authorities cannot be closed.


}   I think it is fine as it primes the cache if it's a real query, but if it's
} fake then it just keeps sending TC=1 until the TTL expires.

What are "fake" and "real" queries?  I didn't think we were talking
about queries that get NXDOMAIN responses for random.example.com.
There would be no need for any patches if there were a way to
distinguish forged DoS queries from real queries from the DoS target.

That reference to cache priming suggested another thought.
As you wrote, the patch does not stop recursion from filling the
local cache.  The patched code is not used when the local cache
already has the answer, as it will after an initial TC=1 response,
because BIND sort of pretends that it is authoritative for everything
in the cache.  That implies the patch should have no effect after
an initial ANY query and TC=1 response.

I think the patch has a false negative rate of approximately 100%.
To check whether I am wrong again, I set up a test server and tried
two `dig +ignore isc.org any` commands.  The first got a TC=1 error
response as expected.  The second command got 3500 bytes of RRs via
UDP.  I expect (but haven't tested) that all subsequent queries get
normal responses until all of the TTLs expire.


So I recommend that those who want to answer all UDP ANY responses
with TC=1 and don't like my real recommendation of Don't Do That!
use one of the fancy iptables or other firewall rules for doing that.
Or am I wrong again and no one has offered such rules?--if so, use
one of the rules that simply block ANY (which I also don't like).


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] DNS Issue

2013-04-26 Thread Vernon Schryver
 From: Jared Mauch ja...@puck.nether.net

 Because someone told them the wrong thing and they don't know any
 difference.  Just because they're an auditor doesn't mean they are
 clued.  Simple thing would be to show them a dns query that requires
 tcp, such as:

Would you show anything to a doctor prescribing bloodletting to cure
what ails you or would you quietly leave?  (except for lab work)

Someone who let a financial auditor with equivalent ignorance about
the fundamentals of bookkeeping near the company's books (not to
mention hiring) would fear being fired or indicted as an accessory.
If your boss or boss' boss' boss etc. hired an equivalent to audit
the company books, you'd infer the worst and start looking for a
new job while the banks are still cashing your paychecks.

The same should apply to network security quacks.  Bogus security
audits or auditors might not signal as much about your paychecks as
bogus financial audits, but they do signal coming security disasters
that probably won't help your career.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Having doubts about BCP38 solving the ORN problem

2013-04-01 Thread Vernon Schryver
 From: Fred Morris m3...@m3047.net

 The premise with regard to BCP38 + open resolvers is that the spoofed
 packets reside on different networks than the resolvers. If these
 resolvers are primarily CPE and other unmaintained equipment, then it
 stands to reason that they reside inside networks containing other
 equipment; and this equipment could be the source of the source-spoofed
 (DNS) packets.

 Reflecting traffic off of an open resolver on one's own network would
 serve to cloak the true identity of the originator.

Other responses have been good about the general issue of where
ingress filtering must be done, but I think that scenario is so
specific that it wants a narrow response.

The rest of the Internet does not care which boxes inside the customer
network send the stream of UDP/53 packets.  It doesn't matter to the
rest of the Internet whether the bad guy reflects them off resolvers
inside the customers network or sends them directly.  On the outside,
it *is* a simple DoS attack without reflection complications.

The customer might care, because floods reflected off internal
resolvers might cause spikes on the resolvers that might be more
noticeable than valid packets (at least packets not violating BCP38
filters) sent directly from the corrupt systems to the outside target.

Floods sent directly from the corrupt systems to the distant target
instead of by reflection will have fewer packet losses.  If the bad
guy can forge source addresses, then a DoS attack direct from the
corrupt system can appear to come from the customer's resolver, but
without triggering rate limits or other defenses on the resolver.

Only if the customer has unusual firewall rules limiting *outbound*
UDP/53 to the resolvers and if those rules are not based on more
than IP source address would reflections be better for the bad guy
than sending directly.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Force TCP for external quereis to Open Resolvers?

2013-03-31 Thread Vernon Schryver
 From: Xun Fan xun...@isi.edu

 to discuss here is TCP. Someone says TCP is expensive, but if we could
 afford entirely shutting down external queries, then two more RTTs to get a
 response seems trivial.

        client                     server
 1.  DNS request/UDP  -->
 2.                   <--  DNS response/UDP

 A normal DNS transaction would end here.  Forcing TCP requires the following:

 3.  TCP SYN          -->
 4.                   <--  TCP SYN-ACK
 5.  TCP ACK          -->
 6.  DNS request/TCP  -->
 7.                   <--  DNS response/TCP
 8.  TCP FIN          -->
 9.                   <--  TCP ACK
10.                   <--  TCP FIN
11.  TCP ACK          -->

(That's what I see with `tcpdump -n -i ZZ0 port 53 and host XXX`
during `dig +vc .com @XXX`   Try it yourself.)

That increase from 2 to 11 packets and from 1 to 5 round trips is not
the only cost.  There is also dealing with the pile of transmission
control blocks (TCBs) for the duration of the time-wait delay, and on
a busy server those costs can be worse.
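Tallying the exchange above (the round-trip time is an assumed figure for illustration):

```python
udp_packets, udp_rtts = 2, 1    # request + response
tcp_packets, tcp_rtts = 11, 5   # handshake, query, response, teardown
rtt_ms = 50                     # assumed client-server round-trip time

print(f"{tcp_packets / udp_packets:.1f}x the packets")
print(f"{udp_rtts * rtt_ms} ms over UDP vs {tcp_rtts * rtt_ms} ms over TCP")
```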


   . And as a internet measurement researcher, I also find the
 value of open resolvers in some research projects that OR greatly extend
 our view to the Internet. I would like to find a way to solve the problem
 that we are facing now, while preserve the open resolvers for its good side.

Open resolvers are certainly not justified by the needs
of researchers.


 So do you think force TCP for external queries to OR is a feasible
 solution to DNS reflect amplification problem?

There are several reasons why it is not feasible.  The owners of almost
all of the many millions of open resolvers would be happier if they
were closed.  Almost all open resolvers are unintentionally open, and
use of them by outsiders is an objectionable waste of bandwidth, CPU
cycles, and other resources.  It would be easier for their owners to
close them than to change them to force DNS/TCP, because in many cases
closing consists of correcting configuration errors or adjusting
firewalls to drop incoming packets addressed to UDP/53, while forcing
TCP requires changing software (or CPE firmware).


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Force TCP for external quereis to Open Resolvers?

2013-03-31 Thread Vernon Schryver
 From: Jim Reid j...@rfc1035.com


 I'm not sure it will make a difference though. The bad guys won't
 bother to do TCP for the obvious reason and will stick with their
 current, DNS protocol conformant, behaviour.

The bad guys would not be able to stick with anything.  The idea
is to change all DNS servers to answer all DNS/UDP requests (or
perhaps all outside requests) with truncated (TC=1) responses to
force clients to retry with DNS/TCP.  It might make sense for the
few resolvers whose owners want them to be open (never mind that
most of those owners are mistaken), but it assumes that it is
possible to install new software on 21 million open resolvers that
are open only because they are not properly maintained.


 Remember too that in these DDoS attacks truncated UDP responses
 would still be going to spoofed addresses. So those victims still
 get hit, albeit without the amplification factor of a chubby DNS response.

That amplification is the reason why the bad guys bother.


 I expect TCP to an anycast resolver -- say 8.8.8.8? -- will prove
 tricky for long-lived connections.

Which long-lived DNS/TCP connections are those?  DNS/TCP to 8.8.8.8
in `dig +vc example.com @8.8.8.8` works for me and takes a fraction
of a second.  (`dig` claims Query time: 50 msec, but that evidently
only covers one of the TCP round trips.  `tcpdump` timestamps show a
total of about 150 ms.)


 Keeping state for bazillions of DNS TCP connections to a resolving
 server will present further challenges. 

Yes, that could be a problem on busy DNS servers handling lots
of legitimate traffic.  The costs are not only holding the TCBs
for the fraction of a second of a DNS/TCP transaction but holding
them for the time-wait delay.  See
https://www.google.com/search?q=tcp+time+wait+exhaustion
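A back-of-the-envelope for why TIME-WAIT state matters at scale (every number here is an assumption for illustration, not a measurement):

```python
qps = 100_000       # DNS/TCP transactions per second on a busy server
time_wait_s = 60    # seconds a closed connection lingers in TIME-WAIT
tcb_bytes = 512     # rough per-connection control-block footprint

concurrent_tcbs = qps * time_wait_s
print(f"{concurrent_tcbs:,} lingering TCBs")
print(f"~{concurrent_tcbs * tcb_bytes / 2**30:.1f} GiB held for TIME-WAIT alone")
```

Even with these modest assumptions, a server that completes each DNS/TCP transaction in a fraction of a second still carries millions of dead connections' worth of state at any instant.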


 [Maybe TCPCT could help.]

I don't see anything in https://tools.ietf.org/html/rfc6013 that reduces
the costs of TCP for DNS.  Perhaps you mean T/TCP to bypass the TCP
3-way handshake.  However, the expensive 3-way handshake in which the
DNS client says Yes, I really did send that DNS request is why DNS/TCP
prevents reflection DoS attacks.  If you bypass the 3-way handshake,
you get the same reflection DoS tool that you have with DNS/UDP.

If you can change the software on 21 million open resolvers
to use DNS over T/TCP, why not do the easier thing of closing them?

If you could change their software and want to keep them open, then
you could install RRL and DNS cookies.  The problems that RRL has on
resolvers would be solved with DNS cookies.  DNS cookies don't need
kernel changes but only changes in DNS client and server software.
https://tools.ietf.org/html/draft-eastlake-dnsext-cookies-03


 Another problem is lots of crapware -- CPE, hotel networks, coffee
 shop wi-fi, etc -- assume DNS is only ever done over UDP.

That invalidates many rationalizations for keeping resolvers open.  In
real life, travelers wanting to use the home office resolver must use
VPNs and so don't need the home office resolver to be open to outsiders.


Vernon Schryver  v...@rhyolite.com


Re: [dns-operations] Force TCP for external quereis to Open Resolvers?

2013-03-31 Thread Vernon Schryver
  Only the DNS people think that. The HTTP people are used to many TCP
  connections to manage and do not think it is impossible.

 So we could abandon DNS/UDP and move exclusively to DNS/TCP?

No one said that it is impossible to handle lots of DNS/TCP connections.

It is a simple, unavoidable fact that TCP is far more expensive than
UDP not only in bandwidth, latency, and CPU cycles but also memory.
I spent years whacking on network code at a vendor once known for high
network performance to improve HTTP hit numbers as well as UDP and TCP
bandwidth numbers.  There are many things that you can do to speed up
DNS/TCP, but DNS/UDP will always be a *lot* cheaper.  Switching from
DNS/UDP to DNS/TCP requires more memory, CPU cycles, and bandwidth.
That's obviously not impossible, but it's also not free.

If you could change the 21 million open resolvers and for crazy reasons
wanted to keep them open, there are cheaper ways to make them useless
for reflection attacks than the TC=1 hack.  But if you could change
them, you would close them for simple hygiene and so not care about
DNS/TCP, T/TCP, DNS cookies, or anything else.

In the real world, the only hopes for fixing the 21 million open
resolvers are
  - protecting them with BCP 38 (faint)
  - blacklisting them at enough authoritative servers, forcing
    their owners to wake up and do something (also faint)
  - firewalls at ISPs filtering incoming UDP/53 (I'm not holding my
    breath, since that's similar to the work of BCP 38)
  - scanning for them and nagging their owners with unsolicited bulk
    email or spam (hopeless, as demonstrated with SMTP relays)
  - years and years and years and years of patience

 ..


} From: Jim Reid j...@rfc1035.com

} In this case, DDoS attackers would get those truncated responses
} sent to their victims. OK, they lose the amplification factor but
} they still get to flood the victim(s) with unsolicited traffic. If
} that lost payload matters to the attacker, they can just ramp up
} the size of their botnet or the number of reflecting name servers
} to compensate:

Without amplification by reflection DNS servers, the bad guys can
deliver more bits/sec at their targets by sending directly to the
targets.  Bouncing bits off mirrors that don't amplify results
in fewer bits at the targets as some packets are inevitably lost.
What's the profit for the bad guy in spending 10 bps of botnet
bandwidth to reflect 9 bps at the target?

Bad guys that send from a few sources instead of a botnet might hope
to hide behind DNS reflections, but to hit a target with 300 Gbps
they'd need to send more than 300 Gbps from those few sources.  Tools
for detecting and tracing and then terminating such large streams exist
and are being improved.


}  I expect TCP to an anycast resolver -- say 8.8.8.8? -- will prove
}  tricky for long-lived connections.
}  
}  Which long-lived DNS/TCP connections are those?
}
} I was thinking of the use case where an application's resolver
} opens a TCP connection and assumes it stays open until the application
} goes away: eg the resolver in a web browser opening a connection
} to 8.8.8.8 and shoving all its DNS lookups down that until the web
} session ends some hours later.

Let's accept the unsupported assumption that there are any long-lived
DNS/TCP connections in the real Internet.  (AXFR and IXFR are irrelevant here.)
Many things break long-lived TCP connections.  If the client software
is not lame, stupid, and written by idiots, it does the obvious,
standard, and trivial thing.  When write(), send(), sendmsg(), or whatever
reports that the connection died, reasonable TCP client software makes
a new connection.  Because HTTP, SMTP, and other applications reuse TCP
connections to save the CPU cycles, bandwidth, and latency of the 3-way
handshake and application authentication, this is not theoretical.


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Force TCP for external queries to Open Resolvers?

2013-03-31 Thread Vernon Schryver
 From: Paul Wouters p...@nohats.ca

 Not all open resolvers are run by brainless admins. And I believe
 open resolvers are crucial to the open nature of the internet.

There is a much better case for open SMTP relays, but we all know
how that turned out.

More power to you if you can follow the lead of Google, OpenDNS, and
others in running open resolvers that do not abuse the rest of the
Internet.  However, in real life the history of SMTP relays is being
repeated.  Not only are almost all open resolvers orphans that would
be closed if their owners knew about them, but most intentionally open
resolvers are run by brainless admins with silly delusions of being
competent enough to prevent abuse.
If you (anyone) are running an open resolver and are not deluded, then
great!  But just as with SMTP relays, if you are running an open
resolver or relay, you're probably fooling yourself.



] From: Joe Abley jab...@hopcount.ca

] There seems to be an implicit assumption in this thread that when
] we say DNS over TCP, we mean setting up a TCP session and tearing it
] down again once per query.

In practice on the real Internet, that is what will continue to be
so for the foreseeable future.  If we could change those 21 million
open resolvers to cache TCP sessions, then we'd also close them and
so not need to pay any of the costs of TCP.


] If instead we imagine persistent pools of TCP connections open between
] stubs and resolvers which are rarely set up or torn down, how is the
] overhead in bandwidth, latency and CPU cycles substantially different
] from UDP?

For the duration of the TCP connection, you use only 3 packets per
request (request, response, and ack unless the ack is piggybacked
on the next request) and so only 50% more bandwidth than UDP.
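The packet arithmetic behind that 50% figure can be written out explicitly (this assumes a persistent connection and, as above, that the ACK is not piggybacked on the next request):

```python
# Packets per DNS transaction once a TCP connection is already open.
# Illustrative arithmetic only; real traffic may piggyback ACKs.
udp_packets = 2   # request + response
tcp_packets = 3   # request + response + ack

overhead = (tcp_packets - udp_packets) / udp_packets
print(f"{overhead:.0%} more packets than UDP")
```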

However, even if you don't think 50% more bandwidth and packets matter,
there are cheaper ways to save enough state to recognize repeat
clients.  Neither the client nor the server need a 100 or 200 byte
TCB for DNS cookies.
https://tools.ietf.org/html/draft-eastlake-dnsext-cookies-03
With DNS cookies, servers don't need to save any state at all.  That
sounds better than expecting the roots to maintain millions of open
TCP connections.
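The stateless property comes from the server deriving its cookie from a keyed hash over the client's address, so verification is just recomputation. The sketch below illustrates the idea only; the secret, digest choice, and truncation are my assumptions, not the draft's exact algorithm:

```python
import hashlib
import hmac

SERVER_SECRET = b"rotate-me-periodically"  # illustrative secret, not from the draft

def server_cookie(client_ip: str, client_cookie: bytes) -> bytes:
    """Derive a stateless server cookie: a keyed hash over the client's
    address and the client's cookie.  Nothing is stored per client."""
    msg = client_ip.encode() + client_cookie
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()[:8]

def verify(client_ip: str, client_cookie: bytes, cookie: bytes) -> bool:
    """Verification recomputes the hash; no per-client table is needed."""
    return hmac.compare_digest(server_cookie(client_ip, client_cookie), cookie)

c = server_cookie("192.0.2.1", b"\x01" * 8)
assert verify("192.0.2.1", b"\x01" * 8, c)         # legitimate repeat client
assert not verify("198.51.100.7", b"\x01" * 8, c)  # spoofed source address fails
```

A spoofer who cannot see the server's responses never learns a valid cookie, which is the point: the server can favor cookie-bearing requests without holding millions of TCBs.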


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Force TCP for external queries to Open Resolvers?

2013-03-31 Thread Vernon Schryver
 From: Xun Fan xun...@isi.edu

 What we discuss here is for those administrators who are willing to do
 something to their OR. Look at what options they have
 now:
 1) keep open = DNS amp attackers are happy
 2) close = no one can query from outside

The idea that those are the only alternatives is as mistaken as the
idea that DNS/UDP packets forcing TCP would contain 512 bytes.

You could invalidate the idea that those are the only current alternatives
by noticing that Google's 8.8.8.8 is open, famous for a long time, and
not abused.  I've recently seen more than one reference to
https://developers.google.com/speed/public-dns/docs/security#rate_limit

You could invalidate the idea about 512 byte truncated packets by
looking at the last line of `dig` output or using wireshark, tcpdump,
etc., by simply understanding the DNS protocol, or by reading many of
the places where forcing TCP with TC=1 has been proposed.  To get an
example of a truncated response, provoke one from a server that uses
RRL and a non-zero SLIP value with `repeat 50 dig ...`.


Please read http://www.redbarn.org/dns/ratelimits and the pages
linked from there for another supposed panacea for intentionally
open resolvers that is not as obviously broken as TC=1 forcing TCP.
When you understand why RRL is not a general solution for open
resolvers (not to mention noticing that RRL includes using TC=1),
perhaps you will also see why TC=1 is not a solution for intentionally
open resolvers.
(That some people have reported good enough results with RRL on
their open resolvers does not redeem it for general use on open
resolvers.)


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Force TCP for external queries to Open Resolvers?

2013-03-31 Thread Vernon Schryver
 From: Paul Vixie p...@redbarn.org

 also, in TCPCT there's room for a payload in the SYN.

In theory there was also room for a payload in the TCP SYN before
popular defenses against syn-flooding.


 in practice this means a normal three way handshake for the first
 connection between an endpoint-pair, but there's a single round trip on
 any subsequent connection between that endpoint-pair, involving one
 packet to send the request, and one or more packets to send the response.

 level -- i think tcp/80 could benefit from zero state cost in
 responders, and single round trip for request plus multipacket response,

 http://static.usenix.org/publications/login/2009-12/openpdfs/metzger.pdf.

 argue for TCPCT i'm arguing for it on the general principle that we'd
 like a responder to have proof of requester identity before sending a
 multipacket response. we would not use these powers to make OR ubiquitous.

That bit about multi-packet responses is critical.  Replacing 2 DNS/UDP
packets with 9 DNS/TCP or 9 DNS/TCPCT packets for an isolated request is
unprofitable.  However, if the DNS response is not a single <=512-byte
UDP packet but a train of DNS/UDP/IP fragments carrying 2 or 3 KBytes, ...


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] FYI: SAC057 - SSAC Advisory on Internal Name Certificates

2013-03-15 Thread Vernon Schryver
  i certainly hope the reference to hr being a local or internal or
  non-unique name is a mistake and that CAs would absolutely refuse to
  issue certs for names that are the same as a really existing TLD:

 Not using FQDNs is foolish and unwarranted - and issuing certificates to
 match unqualified names is not improving the general picture.

 What I find more disturbing is this:
 ...

What I find even more disturbing is that people are still talking
about commercial PKI as if it it had been other than expensive
security theater for more than 10 years (i.e. since CA-2001-04).

Instead of spending effort on equivalents to arguing that a black
semi-automatic rifle is too dangerous for civilians but the same weapon
painted pink is ok, or that a molded hand grip makes a 2 inch knife
too dangerous for an airplane,
spend it on things with at least some real-world security
implications, such as DNSSEC and eventually DANE.

Don't waste time lobbying ICANN, but do urge browser vendors to
start using TLSA records.

It might be extreme and it's certainly unintentionally offensive, but
a case can be made that no one writing from a domain without RRSIGs
on its MX and A RRs should say anything in public about network
security other than to ask about DNSSEC.


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Odd MX queries

2013-03-11 Thread Vernon Schryver
 From: Daniel Stirnimann daniel.stirnim...@switch.ch

 I'm using the current BIND9 9.8.4 RPZ+RRL patch. It's completely evading
 DNS-RRL on the tld-nameserver where a lot of different query-names
 are used and the RCODE is NOERROR.

All of the domains in the first list in your previous message 
give me NXDOMAIN.

How is it evading the BIND9 RRL referral limit on your TLD server?


 On the 2nd-level name-server the MX query rate is only about 120 qps. I
 guess it's too few queries to trigger my generous DNS-RRL config. I
 have response-per-second 20.

 For example, within 15 minutes 81 different query-names are sent. The
 domain which is queried the most is used 186 times within 15 minutes.
 That's way below the DNS-RRL config threshold. However, it's nothing
 which concerns me. As said, the abusive traffic on the 2nd-level
 names-server is quite low. On the tld name-server it was different.

Yes, 81 names/15 minutes is only about 0.1 qps.


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Odd MX queries

2013-03-11 Thread Vernon Schryver
 From: Daniel Stirnimann daniel.stirnim...@switch.ch

 One error I made is that there are lots of different IP addresses
 sending these queries. The IP address 203.45.217.122 which I referred to
 in my original post sends about 50 qps but there are roughly 5800 other
 IPs sending this traffic as well. Some only one query within 15 minute
 but most something between 1 qps and 40 qps.

That's interesting.

 The few IP addresses which send more then my threshold
 (response-per-second 20) are rate-limited.

That's a relief.


If I were eager to repeat the very popular error of confusing guesses
with knowledge and facts, I might expound on botnets and spam and claim
that the increase in spam backscatter in my personal mailbox and the
~7% increase in spam reported to DCC are both real and related to what
you are seeing.
http://www.rhyolite.com/dcc/graphs/?BIG=1end=1363032000resol=1m
http://www.rhyolite.com/dcc/graphs/?resol=1wend=1363032000BIG=1
http://www.rhyolite.com/dcc/graphs/?resol=1wend=1361822400BIG=1

However, I've learned from many years of watching others make
authoritative-sounding declarations about the what, where, why, and how of network
evil, and be immediately or sooner shown to be full of negative clues
(facts that are false).


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Another whitepaper on DDOS

2013-02-25 Thread Vernon Schryver
 From: Tony Finch d...@dotat.at

   But the errornous transfer of ebay.de would create a deasaster with DANE.
 
  In what way would DANE make the theft of a domain worse?

 In addition to vjs's points, note that DNSSEC makes theft of a domain even
 more visible because it is likely to cause horrible breakage for
 validating users.

I didn't mention those alarms, because I assumed the domain was
stolen at the registrar or in the registry so that glue and DS
records would be corrected by the adversary.  I didn't recall the
particular theft, but assumed it involved the common modes of seizure
by the registrar or the use of stolen credentials at the registrar.

Only if the theft is downstream of the registry such as in a master
authoritative server for the domain would DNSSEC raise alarms.  Those
alarms are valuable, but I didn't want to argue nits with people who
after much more than a decade and many public scandals, still haven't
twigged to the unredeemable fraud that is commercial PKI.

Never mind the irony in the likely fact that the use of stolen
registrar credentials would be protected (sic) by commercial PKI.


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Another whitepaper on DDOS

2013-02-23 Thread Vernon Schryver
I wonder if DANE could have prevented Microsoft's recent difficulty
with expired SSL certs.
https://www.google.com/search?tbm=nwsas_q=microsoft+azure+ssl
Instead of an annual bout with internal purchase order and invoice red
tape and with red tape at the CA, could Microsoft have automated the
generation of certs and fingerprint TLSA RRs just as many automate
their generation of zone signing RRSIG RRs?
(Never mind that microsoft.com lacks RRSIG RRs.)

   ...

 From: Doug Barton do...@dougbarton.us
 
 Are there CA vendors who give out EV certificates for $fee + answer the
 e-mail? I know you can get basic SSL certs simply by answering the
 e-mail from the CA.

I can't find anything about EV verification from registrars.  Maybe
I'm blind and stupid, or maybe writing down what they actually do
would be too funny.

I suspect you might need to submit a government registration document
and answer a press-1-if-you're-human robo phone call.  You won't
forge the registration document, because the real things are so
cheap, easy, and unverified.

It's obvious that nothing I put in the online form to get
http://www.sos.state.co.us/biz/BuildCertificate.do?masterFileId=20051118531
was verified other than the credit card number for the $1.00 charge.
(You might need to 'get' that URL twice.)
See also http://www.sos.state.co.us/pubs/info_center/fees/business.html
I've had DBA registrations in other states, and found them just as
unimpressive.

How would you interpret section 5 of
https://www.cabforum.org/EV_Certificate_Guidelines.pdf
to sell me a $1500 EV cert?
https://www.symantec.com/theme.jsp?themeid=compare-ssl-certificates
You couldn't afford to have someone drive past my address to see
if it's a vacant lot, not to mention ask my neighbors if they've seen
anything shady or even ever seen me.  If you want to sell certs to
small businesses, then you cannot charge enough to do any checking.


 Not that look for the green bar is going to be a whole lot more
 successful than Don't say yes to security exceptions you don't
 understand, but I'm curious. :)

Yes, EV certs are expensive tickets for slapstick security theater.
Standard certs and the mailboxes (not SMTP but only for use after
you log into your GoDaddy account), theft protection, scanning, and
other hokum that GoDaddy sells are cheap seats.

(Your recent claim that all registrars up-sell the same junk as GoDaddy
is wrong.  I trust that all of the registrars you've seen are as you
said and like GoDaddy, but I've seen nothing like GoDaddy.  That might
be because I don't look at registrars that I've heard bad things about
or that advertise prices below what I know of their costs (e.g. registry
fees).  I know they'll more than make up their losses in ways I'm too
dumb to catch.)


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Defending against DNS reflection amplification attacks

2013-02-22 Thread Vernon Schryver
 From: Joe Abley jab...@hopcount.ca

 If you can describe BCP38 deployment in a non-trivial network such
 that deployment is to the benefit of shareholders and non-deployment
 is not, I'm all ears. Absent regulation and punitive fines for
 non-compliance, I don't see it.

Civil lawsuits by victims of DNS reflection and other attacks that
depend on failures to deploy BCP38 might help convince boards of
directors.  It might help to take up a collection to help pay the
legal fees of a victim suing one of those non-trivial networks.
I have the vague impression that that kind of fund raising is illegal.

I've learned to avoid using the word fine in a different but related
context.  I have long claimed that ESPs (bulk mailer for hire) could
practically stop the large amounts of unsolicited bulk email that they
send by fining their customers with dirty target lists.  A $100 fine
for each spam complaint verified by the ESP (maybe only after the 5th
complaint and maybe capped at $5,000) would practically stop the ESP
spam sent toward my personal mailbox and to my spam traps feeding DCC.
A representative of a major ESP insisted in public that my claim
is nonsense, because it is illegal (sic) for an ESP to fine its
customers.  Because ESPs are private enterprises, that might be
literally true.  It's also a lie because ESPs could say cleanup
fee or spam complaint processing fee instead of fine without
reducing the disincentive for purchased, harvested, re-purposed,
or other dirty mailboxes in target lists.


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Another whitepaper on DDOS

2013-02-22 Thread Vernon Schryver
 From: Lutz Donnerhacke l...@iks-jena.de

 But the errornous transfer of ebay.de would create a deasaster with DANE.

In what way would DANE make the theft of a domain worse?

Without DANE, the new possessor of a domain need only get SMTP working,
create a new cert, apply to a CA for a signature on it, answer the email
from the CA verifying ownership of the domain, and start using that
new cert on new HTTP servers with improved web pages.

With DANE, only a few things differ.  One difference is that the
new cert can be used as soon as DNS TTLs allow without waiting to
answer ownership-verifying email from the CA.  The second difference
is that before and after the transfer, browser users can be more
confident that the web pages they see are unchanged between HTTP
server and HTTP client.

In no case can you be sure that ebay.de is what you assume it is without
some sort of out-of-band exchange of keys and secrets between you and
ebay.de.  Paying a CA $500 cannot buy more than $500 worth of identity
checking and authentication, and that cannot penetrate more than $500
worth of smoke, mirrors, forged business licenses, etc.  $500 is plenty
for a hobby domain but ridiculous for an eBay.  (Never mind the free CAs.)
Commercial PKI verifications of the identities of strangers have always
been frauds and snake oil sold to punters.  That commercial PKI fees
have always been too small to allow honest identity checks even for
organizations more famous than Ebay was proven more than 10 years ago.
https://www.cert.org/advisories/CA-2001-04.html
http://technet.microsoft.com/en-us/security/advisory/2524375


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Another whitepaper on DDOS

2013-02-21 Thread Vernon Schryver
 From: Jeff Wright jwri...@isc.org

 http://docs.media.bitpipe.com/io_10x/io_106675/item_584633/Gartner%20and%20Arbor%20Focus%20on%20DDoS%20FINAL.PDF

On one hand, it
  - gets significant bits of history wrong, such as the claim that
    SSL had nothing to do with authentication and authorization
    until EV certificates.  If confidentiality (encryption) were
    the sole point of SSL, then SSL would have gone straight to a
    DH exchange and done no public key computing.  EV would not be
    a minor elaboration of the old, widely used PKI.  (page 15)

  - urges the use of DNS Authentication.  I guess DNS authentication
    [would work] to ensure that source queries to a DNS server ...
    are in fact coming from a valid host if you can find and deploy
    DNS stub resolvers that support DNS authentication and then deploy
    them.  I think that's practically impossible for the foreseeable
    future.  It might instead be referring to ACLs in servers and
    relying on IP source addresses as authentication tokens, but that
    would be almost as lame.  (page 6)

  - advocates naive and so bad query rate limiting and separate
    NXDOMAIN rate limiting.  It should have mentioned RRL.  (page 6)

  - advocates applying RegExes and packet capture for no purpose.
    Looking for text in DNS packets will find lots of it separated
    by what look like ASCII control characters.  Unless you have a
    specific target, you're unlikely to do more than waste time by
    manually staring at packets for any port.  (page 6)

On the other hand, those are all minor nits and mostly reflect
my prejudiced and overly strict reading.

Overall, I found it innocuous and entertaining.
If it seems revolutionary or eye opening and you have relevant
responsibilities, then you urgently need more than any such document
can offer.


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] RRL specified in a stable place?

2013-02-04 Thread Vernon Schryver
... an Internet-Draft followed by an RFC would be Really Helpful.

 What track do you expect this to go along?  Is this a DNSOP draft?

Those are best boring details except to those with non-technical
interests in the IETF.

 Because the implementations are really just a way of using existing
 parts of the specifications in creative ways. 

That covers a lot of RFCs.

(They're also risky for
 some operators. 

That covers a lot of RFCs.

  Consider that, if you spoof $ISP's resolver addresses
 and perform one of these attacks, then $ISP gets at least degraded
 service during the rate limit period.

Perhaps I misunderstand, but I think that's wrong in general and based
on the persistent and by now very irritating confusion between client
rate limiting and response rate limiting (RRL).  In addition, ISPs
have reported that installing and turning on RRL has restored DNS
service that had been degraded by apparent DNS reflection DoS attacks.
While your DNS servers are trying to respond to Mqps of bogus requests,
they are often not only flooding the DoS victim but also not getting
out other responses.


 it's not a panacea either,

That covers a lot of RFCs.

and certainly cannot be considered a BCP
 for all use cases.)

Ok, so don't make it a BCP.  Let it be Informational.  Or don't advance
it after publishing it as an I-D.  Keeping change control out of the
IETF will not slow the spread of the idea or harm interoperability
(which in the old days was the reason for RFCs).

If you think not letting the IETF improve and then freeze the
specification will lead to fragmentation and disparate implementations,
then you should oppose an RFC.  Without an RFC, there is more room for
better ideas.  Because there is no directly involved on-the-wire
protocol, there is much less need for an RFC.

Personally, I think it would be nice if it were published at least as
an I-D to ensure that the idea reaches more potential implementers.  However,
it would not be a big deal if the IETF doesn't want it even as an I-D.


The one thing the IETF really should do (if it has not become
interchangeable with the ISO/ITU/UN) is to add two check list items
before advancing future protocols:

   - all servers MUST deal appropriately with excessive requests, such
     as by rate limiting by client, network, request, and/or type of
     request.  This is particularly important for services that
     do not require long lived state.

   - all clients MUST rate limit their requests, both retries and de novo,
     including using at least exponential back-off.
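The client-side item can be sketched as below; the retry limit, base delay, and jitter factor are arbitrary illustrations, not values from any specification:

```python
import random
import time

def send_with_backoff(send, max_tries=5, base=0.5):
    """Retry a request with exponential back-off plus jitter, as the
    client-side checklist item above requires.  `send` is any callable
    that raises OSError on failure; the limits are illustrative."""
    for attempt in range(max_tries):
        try:
            return send()
        except OSError:
            if attempt == max_tries - 1:
                raise  # out of retries; let the caller see the failure
            # Double the delay each attempt; jitter avoids clients
            # retrying in lockstep after a shared outage.
            delay = base * 2 ** attempt * (1 + random.random())
            time.sleep(delay)
```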


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Monday rant againt the uses of the Public Suffix List

2013-01-21 Thread Vernon Schryver
 From: Warren Kumari war...@kumari.net

  Continuing the sarcasm is too much effort, so I'll simply ask why not
  do DNS MX and A requests?  (both because of the fall-back-to-A-if-no-MX

 Please sir, if I run www.images.example.co.uk, can I set a cookie
 at images.example.co.uk? How about example.co.uk? Fine Now .co.uk?

If you are running www.images.example.co.uk, then you should know
all there is to know about cookies at www.images.example.co.uk and
any other domains at which you might legitimately want to set a cookie.

If you are an HTTP client implementor, then I think you should implement
disable third party cookies with the single obvious, fast, simple,
and--if you like--simplistic comparison without needing to check the
PSL.  You should also make disable third party cookies on by
default.
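The "single obvious, fast, simple" comparison presumably looks something like the predicate below: a cookie is third party unless its host matches the page's host exactly, with no PSL lookup. This exact rule is my reading of the argument, not the author's code:

```python
def is_third_party(page_host: str, cookie_host: str) -> bool:
    """Simplistic third-party test: the cookie's host must equal the
    page's host exactly.  Deliberately strict and PSL-free, in the
    spirit of the argument above."""
    return page_host.lower() != cookie_host.lower()

assert not is_third_party("www.images.example.co.uk", "www.images.example.co.uk")
assert is_third_party("www.images.example.co.uk", "ads.tracker.example")
assert is_third_party("www.images.example.co.uk", "example.co.uk")  # even the parent
```

The strictness is the point: an exact-match rule needs no curated list and errs on the side of blocking, which matches the "on by default" recommendation.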


Yes, I am among the many who consider third party cookies at best
undesirable and generally willful and knowing attempts to sell or
otherwise violate our privacy.

Yes, I've occasionally encountered web pages that apparently
legitimately use third party cookies (i.e. without obviously trying
to violate my privacy).  I cannot recall any case where those web
pages could not, and should not, have used other tactics instead.

Yes, I know all HTTP server operators value my privacy.  However,
the values that spammers, advertisers, governments, and other snoops
place on my privacy differ from mine.


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] responding to spoofed ANY queries

2013-01-16 Thread Vernon Schryver
 From: Frank Bulk frnk...@iname.com

 Perhaps the ratio could be a dynamic whitelist -- if it's 1.5 or less, then
 allow the response to go out.

What would be gained by spending the code complexity and CPU cycles
such a mechanism would require?  What bad things would be avoided
or good things achieved?

(Please do not mention false positives, because that notion of false
positive is irrelevant and does not happen with RRL.)


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] Can you force your IPv4/v6 DNS server to return v4 responses only on recursive lookups

2013-01-15 Thread Vernon Schryver
 From: Patrick, Robert (CONTR) robert.patr...@hq.doe.gov

 We need an option like this `break-dnssec` feature to use RPZ for
 stopping user access to DNSSEC-signed domains that are on a block list.

How should it differ from the break-dnssec yes/no modifier for the
response-policy{} statement mentioned in the ARM for BIND 9.9 and 9.8?

Look for break-dnssec in
http://ftp.isc.org/isc/bind9/cur/9.8/doc/arm/Bv9ARM.ch06.html
or
http://ftp.isc.org/isc/bind9/cur/9.9/doc/arm/Bv9ARM.ch06.html

There is a single break-dnssec bit for each view.  It seems likely
that those who want to break DNSSEC with RPZ want to do it for the
entire view.  In addition, the precedence rules (and code) for
choosing which policy zone to apply are already too complicated without
a separate break-dnssec bit for each policy zone.


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] responding to spoofed ANY queries

2013-01-13 Thread Vernon Schryver
 From: David Conrad d...@virtualized.org

 I suspect you're misunderstanding what I'm saying ... 

 Yes, it is yet another form of security theatre, but when has
 that stopped anyone?

Yes, I misunderstand your position as the same as others'.

 However, I'm pretty sure this isn't appropriate fodder for dns-operations...

Perhaps not, if the supposed lack of laws is not an excuse for DNS
recursive server operators to keep them open and for authoritative
servers to refuse to install some kind of rate limiting.  The many
years of "stop bothering me, spam isn't illegal" responses from
operators of open SMTP relays and other spam-critical services make
me wonder.  There are many open recursive servers and authority
servers without rate limiting or with RRL manually disabled except
for the previously common flavors of attack.




] From: Frank Bulk frnk...@iname.com

] If the problem is amplification, why not only perform RRL on only those DNS
] communications exchanges that have certain amplification factor (i.e. 1.5).

That sounds nice but has problems.  The main one for me is that
you'd have to wait until the response has been marshalled before
determining its size and deciding whether to drop it.  That seems
to me harder to code in BIND9 and more expensive in CPU cycles.

A better reason is that simple A requests are much smaller than typical
non-DNSSEC responses.  `dig iname.com @204.74.108.1` sends 38 bytes
and receives 225 for 5.9X amplification.  5X is not as flashy as 30X,
but is a big problem.  5X is a lot more than your 1.5X and so in
practice you would rate limit all responses.  If you always do it,
you might as well do it in the cheapest way possible, before knowing
and regardless of the size of the response.

Even 1X or no amplification could be useful to a bad guy wiggling
through firewall holes or obscuring an origin.
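The arithmetic can be sketched in a couple of lines; the 38- and 225-byte figures are the dig measurements quoted above, and the helper name is ours, not from any DNS tool:

```python
# Amplification = bytes reflected toward the victim per byte the
# attacker spends on the forged query.  38 and 225 are the measured
# sizes quoted above for a plain A query for iname.com.
def amplification(query_bytes: int, response_bytes: int) -> float:
    return response_bytes / query_bytes

print(f"plain A query: {amplification(38, 225):.1f}X")  # about 5.9X
```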


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] responding to spoofed ANY queries

2013-01-12 Thread Vernon Schryver
 From: David Conrad d...@virtualized.org

  The tool is too tempting and potentially effective for too many government
  projects ranging from national hostilities to operations by law
  enforcement against criminals to expect governments to entirely
  support BCP38 or even allow its complete deployment.  This is like
  the prospects for governments and politicians limiting their own spam.

 A possibility but I've not yet reached that level of cynicism. I
 suspect that when there is a sufficient demonstration of the effectiveness
 of source address spoofing against government or infrastructure targets,
 laws will suddenly appear that require ISPs to take steps to ensure
 the traffic they source has appropriate source addresses, just as laws
 appeared to allow lawful intercept of traffic.

Wouldn't spoofing against government or infrastructure targets invoke
the Patriot Act and other terrorism laws?  Would an ISP that hasn't
deployed the recommended, available, and official standard measures to
prevent such attacks be an accomplice in a violation of the CFAA?

The laws mandating support for wiretaps are in the opposite direction,
because they mandate support for network abuse.

Laws requiring that all routers support one or more of the BCP 38
mechanisms sound rather late and redundant and wouldn't do much to
make ISPs turn them on, especially given the occasional perfectly
legitimate situation where simple ingress filtering is wrong.

More relevant than CALEA are anti-spam laws and the current noise about
Iran being the source of recent reflection attacks.  (Never mind whether
that noise is true this time or is merely more lies and FUD from the usual
suspects and beltway bandits.)  Everyone with experience in the spam
realm knows how impotent the anti-spam laws have been.  Even if someday
one nation after all these years of broken promises really does outlaw
unsolicited bulk email, there will still be plenty of others that
won't.  Why doesn't the same dire problem affect laws against all forms
of network abuse including IP header forgery?

Then there is the enforcement problem.  Would you have DHS inspectors
checking compliance?  Would they spot check cages in data centers,
consumer access routers, and so on and so forth?  That sounds like a
bigger job than airport security.  Would the inspectors be as competent,
trustworthy, and educated as TSA inspectors?

A common reaction at this point is something about the civil
courts.  Why haven't the targets of the recent reflection attacks sued
anyone?  All authority servers that are not negligent should by now
be doing something, whether RRL in BIND or NSD or operators standing
by with axes.  Reflecting recursive servers have no excuse besides
desires to make money cheaply.  I suspect some of the ISPs of the
sources of the forged requests have been identified, but I've not heard
of any court cases against ISPs.  Besides the lack of action from the
victims, there are the lessons of spam history.  You won't find any
signs of the civil legal victories of AOL and Earthlink in charts of
spam volume.  Unless Spamford Wallace goes down for electronic mail
fraud, intentional damage to a protected computer, and criminal
contempt, will he ever really retire?
https://en.wikipedia.org/wiki/Sanford_Wallace


  IP source address forging is like spam.

 Not really.  Spam doesn't affect anything except email.  Source
 address spoofing can affect _anything_ on the Internet.

Even if we agreed that spam affects nothing but email (we don't), we
should learn the lessons of the spam war both in general and in the
effectiveness of laws on such problems.  That there would be fewer
interests trying to water down a BCP 38 law into equivalents of CAN-SPAM
is irrelevant, because most spam is and has been illegal since CAN-SPAM
was signed.

In the real world, the phrase covering laws against cybercrime
is security theater.


Vernon Schryverv...@rhyolite.com
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs


Re: [dns-operations] responding to spoofed ANY queries

2013-01-10 Thread Vernon Schryver
 From: Casey Deccio ca...@deccio.net

 I'm not familiar with other RRL behavior, but to provide some numbers for
 BIND's patch:  All responses to rate-limited queries are truncated, but
 default behavior is to withhold response altogether for only 1 out of 2
 such queries (50%). 

As Paul Vixie often says, the goals of RRL do not include stopping
DNS reflection DoS attacks but:
  1. attenuating reflection attacks so much that the attacker would
  do more damage to the victim by sending the bogus DNS requests 
  directly to the victim
  2. not letting the attacker deny DNS service to the victim by
  failing to answer the victim's real requests.

Goal #1 is achieved by attenuating or sending fewer bits toward the
victim than the attacker sends to the DNS server.  With slip=2, the
attacker's bits are reduced by about 50%.  The attacker would do twice
the damage by sending toward the target instead of the reflector.

Goal #2 is approached with the slip hack and by rate limiting responses
instead of clients.  The victim's DNS requests are unaffected unless
they are for the same name and type as the attacker's forged requests.
Goal #2 is practically reached as long as the attacker avoids the major
query types.  If the attack uses a major type such as A, then we rely
on the probability of at least one of the victim's requests getting a
slip or TC=1 response.  It helps that the victim need only get one
response per DNS cache lifetime.


 depends on the query rate.  The statisticians might provide a good rule of
 thumb for reasonable response rate given query rates, but it seems like 50%
 is in fact a good starting place.

With slip=2 and the victim trying and retrying a total 3 times, the
probability that all of the victim's responses will be dropped is
0.5*0.5*0.5 = 0.125.  That makes the probability that the victim
will get a response despite matching the DoS flood about 88%.  That's
not perfect, but not bad.  If it's a mail system that will retry a
few times or a user at a browser that will manually retry a failed
page, it gets even better.
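The retry arithmetic above can be checked directly; a sketch, assuming each rate-limited try independently has a 1/slip chance of drawing the TC=1 "slip" response instead of silence:

```python
# With slip=2, each rate-limited try has a 50% chance of getting the
# truncated (TC=1) response instead of being silently dropped.  The
# victim wins if any of its tries gets a response.
def p_some_response(slip: int, tries: int) -> float:
    p_silent = 1.0 - 1.0 / slip       # one try silently dropped
    return 1.0 - p_silent ** tries    # at least one try answered

print(f"{p_some_response(2, 3):.3f}")  # 0.875, the ~88% figure above
```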


 The BIND RRL patch also has an option for scaling down the slip value
 (which dictates response rate) in the presence of increased query rates.  I
 haven't had time to play with it, but the idea is smart.

The impression that the slip value can be scaled down using the gross
qps rate comes from an error of mine in the documentation.  Only the real
rates can be scaled.

I've proposed that the 'slip' value be scaled up by the qps ratio or the
square of the qps ratio to keep the TC=1 responses/second rate constant.
On the other hand, any reduction of TC=1 responses (i.e. increase in
slip) reduces the reason to have slip.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS ANY requests from Amazon?

2012-12-18 Thread Vernon Schryver
 From: Stephane Bortzmeyer bortzme...@nic.fr

   While rate limiting by client IP address stops
  a reflection attack, it also turns off almost all DNS service to the
  client from the server.

 No one in his right mind limits simply by the client's IP
 address. People typically also use the type of the request (today,
 typically ANY). See my mini-HOWTO for Linux Netfilter in this thread.

That tactic makes limited sense if you assume that the bad guys are
too stupid to see that they can bypass ANY filters with almost as
much amplification with other query types such as A.

`dig +dnssec www.nic.fr @ns1.nic.fr` offers amplification of more than 25X.

 +

] but my point is that it works *today*, with *actual* attacks. So, it
] definitely helps but keep your eyes open, have alternative solutions
] in place and do not put all your eggs in one basket

Yes, automated response rate limiting (RRL) is too small a basket
to hold all of anyone's eggs.  However, a static ANY filter amounts
to trying to keep all of your eggs in a handkerchief.  Manually
maintained iptables rules are akin to hiring jugglers to keep all of
your eggs in the air.


   (nobody asks ANY
 isc.org in the real world, except the attackers).

The common claim that no one uses ANY is so overstated that it is
false.  I use ANY to diagnose real problems in the real Internet.


 I appreciate the BIND RRL patch and it is obvious to me that we must
 continue the research in dDoS mitigation, but let's not drop the
 mitigations techniques that work *today*. (The attackers are not
 superhuman, they use imperfect techniques.)

That's quite true, but advocating defenses such as static ANY filters
or manually installed iptables rules is like advocating homeopathy in
the fight against malaria.
The suggested iptables tactic would be better if the Python program
somehow automatically installed the iptables rules.  However, that might
be too effective.


 In actual deployments, some people may be unwilling or unauthorized
 (corporate policy) to install unofficial patches on a production
 server. That's why we should not reject blindly the OS-level rate
 limiters (see my mini-HOWTO in this thread).

A third party's iptables rules (not to mention a third party's program
that generates the iptables rules) should get as much scrutiny as
unofficial patches for BIND or NSD.  Neither iptables hacks, a third
party's Python program to generate iptables rules, nor DNS server
patches (even if official) should be used lightly.

The BIND patch includes a standard test suite in bin/tests/system/rrl.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS ANY requests from Amazon?

2012-12-18 Thread Vernon Schryver
 From: Dobbins, Roland rdobb...@arbor.net

 Sure, but RRL isn't the issue; it's all the rest of what 'application
 firewalls' do which causes them to choke.  I've yet to see one which
 doesn't choke under even moderate DDoS, and have never seen one which
 implements any form of classification in a stateless or minimized-state
 manner.

It's well known that Roland Dobbins doesn't think much of application
firewalls or stateful firewalls in general.  I also don't think much
of application firewalls, and not only because of the FUD that fills
much of their brochures, the never-ending broken vendor promises, or the
exaggerated performance claims.  I've been grumbling since tcp wrappers first
appeared that application firewalls are usually poor bandaids for
stupid application security holes that could (and should) be more
securely and cheaply fixed in the applications.

But all of those criticisms are irrelevant to what hypothetical firewalls
might do for current and foreseeable DNS security issues.  That currently
popular firewalls can't cope or do only stupid stuff like ANY filtering
doesn't justify rejecting firewalls for reflection attacks on principle.

Besides, DoS attacks on DNS servers themselves (as opposed to using
DNS servers to attack others) are best handled outside in smart (e.g.
sane state table management) application firewalls.  It's not good for
a DNS server to discard excessive (relative to the server's own
resources) requests.  By the time a request can be discarded by the
server, too many local resources have been burned.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS ANY requests from Amazon?

2012-12-17 Thread Vernon Schryver
 It's starting to look like per-client-IP rate-limiting features
 are necessary...

 There is a patch available for rate-limiting inside BIND.

There is also RRL code for NSD.

Please note that the main thrust of the BIND and NSD rate limiting
code is response rate limiting (RRL) and *NOT* per-client IP address
rate limiting.  Per-client rate limiting is generally the best that
can be done with simple firewall rules or access control lists, but
has limitations and can cause harm.  While rate limiting by client IP
address stops a reflection attack, it also turns off almost all DNS
service to the client from the server.  Temporarily denying name service
to a target has long been a major part of more serious security problems
than denials of service.  For example, if you need to fool your target
about the IP address of www.example.com, it's handy to have the several
seconds of a full set of DNS client timeouts to try many DNS transaction
IDs instead of only the milliseconds before the real answer arrives.

With RRL (especially with the slip feature), the victim of a reflection
attack often sees no change in DNS services from the rate limiting
server during a reflection attack.  With client IP address rate limiting,
the server stops answering practically all requests from the victim.

The current version of the BIND RRL patch does have support for
per-client rate limiting, but it exists only to satisfy popular
demand.  Its use is a bad idea in most cases.


I've said something like this before but I keep seeing claims that
BIND rate limiting is harmful or bad based on the mistaken notion that
it limits requests by IP addresses instead of limiting responses by
{IP,qname,qtype}.
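To make that distinction concrete, here is a hedged sketch of a limiter keyed on {IP,qname,qtype}.  It is not BIND's code; the /24 aggregation, constants, and names are illustrative:

```python
from collections import defaultdict

RATE = 5   # responses/second allowed per bucket (illustrative)
SLIP = 2   # every SLIP'th dropped response goes out truncated (TC=1)

# One token bucket per (client network, qname, qtype), so a flood of
# forged queries for one name cannot suppress the victim's unrelated
# queries the way a per-client-IP limit would.
buckets = defaultdict(lambda: {"tokens": float(RATE), "stamp": 0.0, "slipped": 0})

def classify(client_ip: str, qname: str, qtype: str, now: float) -> str:
    net = ".".join(client_ip.split(".")[:3])        # aggregate clients to /24
    b = buckets[(net, qname.lower(), qtype)]
    b["tokens"] = min(RATE, b["tokens"] + (now - b["stamp"]) * RATE)
    b["stamp"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return "answer"
    b["slipped"] += 1
    return "truncate" if b["slipped"] % SLIP == 0 else "drop"
```

An attack on one (network, qname, qtype) tuple exhausts only that bucket; the same client's queries for other names are still answered.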

The other common claim about RRL is that it is too expensive.  Never
mind that much bigger servers are using RRL than the servers run by
people expressing that concern.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS ANY requests from Amazon?

2012-12-17 Thread Vernon Schryver
 From: Patrick, Robert (CONTR) robert.patr...@hq.doe.gov

 I don't disagree that limiting responses is a smarter tack than
 limiting requests, with respect to making an informed decision prior
 to discarding traffic.  Having zone and query-type plus response
 data to evaluate the client hash is more information than looking
 only at source and destination IP address, as may be implemented
 at a firewall or within the O/S.  Some of these data elements could
 also be tracked by an application-aware firewall.

Yes, you could do response rate limiting (RRL) within an application
aware firewall by having the firewall do almost all of the work
of your DNS server.  For example, your RRL mechanism (whether in a
firewall or DNS server) must count all NXDOMAIN responses to a given
IP address as identical to avoid spewing GBytes/sec of big signed NXDOMAIN
responses about distinct random, invalid domains.
`dig +dnssec asdf1234asdf.com @a.gtld-servers.net` gives a 1K NXDOMAIN.
Referrals have a similar issue.
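A sketch of that counting rule (names illustrative; a real implementation folds referrals and other error responses similarly):

```python
# All NXDOMAIN responses toward one client network share one bucket;
# otherwise a flood of random invalid names would get a fresh bucket
# per name and never hit any rate limit.
def bucket_key(client_ip: str, qname: str, qtype: str, rcode: str):
    net = ".".join(client_ip.split(".")[:3])  # aggregate clients to /24
    if rcode == "NXDOMAIN":
        return (net, "<nxdomain>")            # qname deliberately ignored
    return (net, qname.lower(), qtype)
```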

A firewall that is as DNS-aware as that should not waste the computing
it has already done in building the response it had to count.  If things
are ok, it should go ahead and send the response.

Never mind the consistency, maintenance, and other problems that
always come with duplicating things, whether definitions of constants
in code or the big chunks of code and data that are a modern DNS server.


 ...
 Allow administrators the freedom to set the limit to any value and/or
 disable the feature, but shipping the product with a smart default
 may be viewed as a pragmatic step forward in noise reduction.

The right RRL value depends on each server's popularity.  It might be
reasonable to ship DNS software with a default rate limit suitable for
modest servers (e.g. 5 or fewer responses/second) and expect big server
operators to make adjustments.
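With the BIND RRL patch, such a default would be expressed roughly like this named.conf fragment (values illustrative; see the patch's documentation for the exact clause names):

```
options {
    rate-limit {
        responses-per-second 5;  // modest-server default discussed above
        slip 2;                  // every 2nd dropped response sent TC=1
        window 5;                // seconds of history kept per bucket
    };
};
```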

 Continuing to deploy into the wild without any rate-limiting isn't
 the best approach long term.

Neither is tolerating unnecessary open recursive servers and ignoring
BCP-38.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] First experiments with DNS dampening to fight amplification attacks

2012-10-26 Thread Vernon Schryver
 From: Dobbins, Roland rdobb...@arbor.net

 this sounds like a new application of 'the chemical polluter business model'.

 There's more to it than that, though.  It's important to understand
 that those who are purchasing and deploying network gear often are
 nonspecialists, and so frustrations, project delays, etc. would
 crop up in the customer organizations - who would then complain...

but that *IS* 'the chemical polluter business model' which
is also the spam problem which is also the tragedy of too many
sheep on the commons.
It's cheaper and easier in the short term to pollute, ignore spammers,
and overgraze the commons.  The bosses of the shepherds, abuse desks,
and refinery engineers hear only about the costs and problems of not
overgrazing, terminating profitable accounts, and not polluting.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Massive DNS poisoning attacks in Brazil

2012-10-03 Thread Vernon Schryver
 From: Tony Finch d...@dotat.at
 To: Paul Vixie p...@redbarn.org

 Paul Vixie p...@redbarn.org wrote:
 
  in http://www.ietf.org/mail-archive/web/dnsext/current/msg11700.html i
  was thinking that we'd add send chain as an edns option, and then add

 I like this plan.

All of those DNS tunneling, triggering, alternate port, and other
variant protocol schemes for dealing with hotel and public access
point attacks on DNS are either unnecessary in the long run or depend
on practically no one ever using them.  They are like the ad hoc schemes
subscribers to this mailing list use to tunnel other protocols home.

Any scheme that works around DNS, HTTP, ssh, etc.
man-in-the-middle attacks will, once it becomes popular, be blocked,
proxied, or hijacked unless most users normally use tools that
detect and refuse to work with men in the middle.

If the browsers and stub DNS resolvers of most users did DNSSEC, DANE,
and HSTS, then any men in the middle will be obvious and won't be
installed except for purposes that users tolerate, including access
point login, employment behind corporate firewalls, and living under
authoritarian regimes.  In addition, those tunneling schemes will be
unnecessary.

To put it another way, if HTTP replaced IP as the Internet protocol
without any real improvements in end to end security, then the
censors and hijackers would apply their tools to HTTP.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Massive DNS poisoning attacks in Brazil

2012-10-03 Thread Vernon Schryver
 From: Tony Finch d...@dotat.at

 You are right about dicking around with port numbers and TLS or HTTP
 framing. However the send chain EDNS option would be a widely useful
 operation for validating stubs.

 A stub validator could perhaps send DS and DNSKEY queries for all the
 truncated versions of the name between the target name and the root, which
 it would have to do concurrently to avoid latency pain, but then it will
 have to iterate this to deal with CNAME and/or DNAME chains. The recursor
 has already done all the work so it would be nice to get all the results
 back in one go.

That's a good point, except I can only go as far as "somewhat useful."

On http://www.cnn.com/ just now I see only www.cnn.com, i.cdn.turner.com,
i2.cdn.turner.com among about 33 images and icons.
Getting the DNSSEC chains for those half dozen DNS names (I probably
missed some and I disable most javascript) would save only a trivial
few round trips for a stub with a cache given the round trips to
fetch those images (and javascript).
Besides, the saved round trips would be to the nearby trusted server
that should be answering within 50 milliseconds and be closer and faster
than the CDN box serving the content,
not to mention web sites not served by the CDN box.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Massive DNS poisoning attacks in Brazil

2012-10-03 Thread Vernon Schryver
 From: David Conrad d...@virtualized.org

  Any popular scheme that works around DNS, HTTP, ssh, etc.
  man-in-the-middle attacks that become popular will be blocked,
  proxied, or hijacked unless most users normally use tools that
  detect and refuse to work with men in the middle.

 You're assuming the MITM attacks are intentional. 

No, I assume only either that the men in the middle will back off if
they irritate enough users or that they can be detected.
(Never mind corrupt DNS registrars or registries attacking DNSSEC.)

   My impression
 is that the majority of the issues in getting EDNS0-requiring
 protocols to work are due to ignorance, e.g., valid DNS responses
 are always UDP512bytes or valid DNS types are {A,MX,SOA,NS,PTR,TXT}.
 If this is true, then egregious hack workarounds like using HTTP/S
 as a transport will solve most of the problem (not that I think
 this is the best solution).

If DNS/TLS/HTTP became popular, then the same actors that filter
DNS/UDP for 512 bytes or less common types would have the same
motives to do the same to DNS/TLS/HTTP.  To filter 512 bytes or
RRSIG or TLSA records, you must be looking at the bits.  Breaking
DNS is not accidental, not even with NAT.  The reasons that require
or allow you to do whatever you're doing to DNS/UDP/IP would apply
to DNS/TLS/HTTP if DNS/TLS/HTTP were popular.

On the other hand, if many user computers have validating stubs that
compute SERVFAIL for broken DNSSEC and so make gethostbyname() in
applications fail, then many users will yell at hotel concierges for
$15/day WiFi that doesn't work and use LTE instead of paying $15/day.
Many hotels would change and allow EDNS0 after the sign-on.  Employers
would either do the same or point to conditions of employment.  State
actors would either do the same or send whiners to gulags.


 

} From: Andrew Sullivan a...@anvilwalrusden.com

} I see.  So your model is that the application asks for a TLSA record,
} and if it gets one then it can infer that the record also passed
} validation?  

} How can the application be sure the resolver is
} DNSSEC-aware?

The important answer is the same way the application can be sure
the resolver is not some other kind of malware.

The trivial answer is in the API used by the application to get TLSA
records.  For gethostbyname(), HOST_NOT_FOUND in h_errno is plenty
good enough for the SERVFAIL that comes from failure to validate A or
AAAA records.  For other record types, you need either the record set,
the empty record set, or a half bit of a failure flag.  Applications
do not now and will never care whether a failure is due to any of the
myriad of reasons for getting a SERVFAIL or REFUSED DNS response,
including the new reason of failure to validate.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Massive DNS poisoning attacks in Brazil

2012-10-02 Thread Vernon Schryver
 From: Andrew Sullivan a...@anvilwalrusden.com

 I don't think this is the problem at all.  The problem is that even if
 you can get that out at the end point (and I can, using DNSSEC
 Trigger), it does you no good because your application _can't tell_
 what happened.  If I'm a web browser programmer, I want to be able to
 know whether the DNSSEC validation worked before I start using the
 TLSA record.  Today, that is too much work (and probably reduces to
 implement a resolver in the browser).

Browsers are certainly not the only application, even if it is true
as Paul Vixie recently said that the Internet is little more than the
web for most connected computers (e.g. phones).  Writing DNSSEC
validation code for every application that depends on accurate DNS
data would be as crazy as not using libraries and daemons for other
local authentication and authorization.

The only reasonable solution is to give stub resolvers some of the
features of recursive resolvers including DNSSEC validation and caching
to make the costs of DNSSEC tolerable.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Massive DNS poisoning attacks in Brazil

2012-10-02 Thread Vernon Schryver
 From: Paul Vixie p...@redbarn.org
 To: David Conrad d...@virtualized.org
 CC: Vernon Schryver v...@rhyolite.com, dns-operations@lists.dns-oarc.net

  The only reasonable solution is to give stub resolvers some of the
  features of recursive resolvers including DNSSEC validation and caching
  to make the costs of DNSSEC tolerable.

  Why not get rid of stub resolvers completely and simply use recursive 
  resolvers?

I think the code to parse the BIND9 configuration grammar and nothing
more would be excessive and grotesque.  The code to support all of
that stuff would be obscene.
As far as only DNSSEC is concerned, you don't need a lot of the
complications that a real authority server needs.  (e.g. special NSEC3
database trees or lists to make big zones less slow.)

Of course, if the only available code for your situation is BIND, then
you could use BIND with a tiny configuration file.  The package would
be smaller than current Firefox binaries that send me running and
screaming in horror.


 there's an urban legend about how the authority servers depend on
 caching by intermediate recursives and that if every end system had its
 own recursive server on board the authorities would melt.

 real traffic it might get the dreck percentage down to 80% but it
 wouldn't melt anything.

No matter how over-provisioned authority servers are, I don't understand
why making stubs more like real resolvers should increase traffic to
authority servers.  Why couldn't you do the equivalent of moving the
DNS servers named in the system's equivalent of /etc/resolv.conf to
the equivalent of a BIND forwarders{} statement and putting localhost
into resolv.conf?
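That arrangement, with illustrative addresses, would look something like:

```
# /etc/resolv.conf on the end system: applications ask localhost
nameserver 127.0.0.1

# named.conf for the local validating cache: forward to the servers
# that used to be listed in resolv.conf (addresses illustrative)
options {
    forwarders { 192.0.2.53; 192.0.2.54; };
    forward first;
    dnssec-validation auto;
};
```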

A full featured DNS server can't bypass men in the middle any more
than a bare bones DNSSEC validating caching forwarder.  There's no
security reason to go to the real authority servers if your local DNS
servers are corrupt.  The bad guys who corrupted them can attack your
DNS traffic going outside.  All you can reliably do is detect evil,
and only if you can somehow get the root key.  Detecting evil is often
enough of the battle.  In many (but certainly not all) cases, the bad
guys react to sunshine like other vampires.  In the other cases,
you can choose to not play the game by their rules or at all.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] First experiments with DNS dampening to fight amplification attacks

2012-09-28 Thread Vernon Schryver
 From: Matthäus Wander matthaeus.wan...@uni-due.de

  Hmmm for authoritative servers, we might also emit a CNAME challenge.

  We could encode or encrypt the correct destination in the CNAME; for A
  and AAAA this is trivial. If you come back to resolve
  encoded-12.32.43.43.attackeddomain.com, you get 12.32.43.43 etc.

 There has recently been a patent granted on this method:
 http://www.freepatentsonline.com/8261351.html

 Though they don't use it to decide about blocking,

Is that because converting a reflected flood of DNSSEC signed
responses to a reflected flood of DNSSEC signed challenge CNAMEs
is not an impressive defense for DNS reflection attacks?

Never mind that packet losses during an attack can increase and so
doubling the number of packets that must succeed for a legitimate
DNS/UDP transaction is unlikely to be helpful.


but use the CNAME
 challenge on every query, still providing a small amplification. This
 comes at the risk of running into resolver issues with NS or MX records.

and resolver CPU loads for DNSSEC signatures for all of those
synthetic challenge CNAMES during an attack.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] How many kinds of DNS DoS attacks are we trying to stop ?

2012-09-27 Thread Vernon Schryver
   DNS DDoS amplifier resistance: as a thought, would it be a reasonable
   step, not hurting interop, to have an authoritative DNS server process
   a UDP-based ANY query by including, at most, an MX and any A responses
   in the ANSWER section and setting the TrunCated bit of the response if
   there were any other records skipped?

Try some experiments to see what kind of amplification you can get
without ANY.  I see about 20X from `dig +dnssec asadfasdf.com`

If your defenses handle non-ANY attacks, then what do you gain from
doing anything in particular about ANY except more code, more
bugs, more CPU cycles, and fewer queries/second?

Doing anything special for ANY queries makes as little sense as
filtering all ICMP packets.

Why is it that so much of computer security is based on the insane
assumption that everyone else, and especially adversaries, are stupid?
There is always an easy solution to every human problem--neat,
plausible, and wrong.  --H. L. Mencken


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] How many kinds of DNS DoS attacks are we trying to stop ?

2012-09-27 Thread Vernon Schryver
 From: Olafur Gudmundsson o...@ogud.com

 ...
  If a traffic reducer turns on TC bit in its responses, then if no 
 TCP connection is completed during the next N seconds,
 the reducer can go to full drop mode.

Should the DNS RRL patch stop slipping truncated (TC=1) responses
if it seems that no TCP requests have been seen from the CIDR block
within window seconds?

 pro:
  - it would help answer concerns about contributing to the DoS attack,
 because some of the slipped responses are to forged requests.
  - surely some DNS reflection DoS CIDR block targets lack DNS
 servers and the truncated responses only harm them.

 con:
  - it's not strictly necessary and might not be justified by its
  code and potential bugs.
  - the truncated responses are infrequent and small enough that
 they might not matter.
  - small reflection DoS targets might be sending fewer than 1 request
  per window seconds, and so would miss the false positive mitigation
  effects of the truncated responses.
  - even large reflection DoS targets might be sending fewer than 1 request
  per window seconds to most DoS reflectors and so would miss the
  false positive mitigation effects of the truncated responses.
  - for obvious as well as obscure implementation reasons, the 'TCP seen'
  indicator would have a few errors in the 'none seen' direction.

I've a detailed sketch of the necessary changes to the code, but
I'm inclined to forget them.

Opinions should probably be expressed in the RRL mailing list at
ratelim...@lists.redbarn.org or
http://lists.redbarn.org/mailman/listinfo/ratelimits
instead of the dns-operations mailing list.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] How many kinds of DNS DoS attacks are we trying to stop ?

2012-09-27 Thread Vernon Schryver
I trust you have looked at other imperfect solutions, such as rate limiting
by (qname, qtype, IP address).  In your estimation, how does that compare
to trimming ANY and DNSSEC responses?


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] correction about RRL leakage

2012-09-26 Thread Vernon Schryver
 From: glen wiley glen.wi...@gmail.com

 This seems like a degenerate case to me...there is a threshold below which
 attacks
 are no longer meaningful.  For most name servers I suspect that an attack
 is only
 interesting at some rate well above 10's of qps.

DNS RRL is less about defending a DNS server than about defending the
victims of the server.  Only small or at most modest DoS attacks on a
name server would be helped by dropping responses.  One of the most
effective families of DoS attacks against a name server is explicitly
not addressed by the DNS RRL code.  (There's no profit in enumerating
attacks against DNS servers themselves or flogging their details here.)

DNS RRL is mostly about mitigating DNS amplified reflection attacks
in which an attacker bounces packets off DNS servers toward the
real target and the DNS servers reflect or send many more bits
toward the real target than they receive from the attacker.

For example, a request for a DNSSEC-validated A record for asdf.isc.org
from a recursive resolver sends about 14 times as many bytes (~700)
toward the supposed source as were in the original request (~50).


 As a name server operator not only am I not likely to see anything odd in
 an attack
 like that, I really don't have the time or inclination to care about
 volumes in that
 range.

My DNS servers are certainly not what I'd call busy, but I'd probably
not notice an extra 100 qps for days.  However, a bad guy could send
each of 1000 DNS servers 100 41-byte queries per second forged from
10.2.3.4, about 33 Kbit/sec toward each server.  Each of those requests
would normally result in about 700 to more than 2000 bytes depending
on the query, so 10.2.3.4 would see 0.6 to 1.6 Gbit/sec.
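The back-of-the-envelope arithmetic for that scenario can be checked directly (numbers from the paragraph above):

```python
# 1000 servers, each receiving 100 forged 41-byte queries per second,
# each query reflecting as a 700- to 2000-byte response.

SERVERS = 1000
QPS = 100                       # forged queries/sec per server
QUERY_BYTES = 41
RESPONSE_BYTES = (700, 2000)    # typical response sizes from the post

per_server_kbps = QPS * QUERY_BYTES * 8 / 1e3
lo, hi = (SERVERS * QPS * b * 8 / 1e9 for b in RESPONSE_BYTES)
print(f"attacker sends ~{per_server_kbps:.0f} Kbit/s to each server")
print(f"victim receives ~{lo:.2f} to {hi:.2f} Gbit/s")
```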

A discouraging fact is that rate limiting doesn't help if the bad guy
uses a list of 100,000 or 1,000,000 servers and only 1 or 0.1 forged
query/sec.  The only hope is that by the time the bad guys get smart
and ambitious enough to use millions of reflectors, BCP38 will be so
common that the sending systems can be found and quenched.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] First experiments with DNS dampening to fight amplification attacks

2012-09-24 Thread Vernon Schryver
 From: paul vixie p...@redbarn.org

 first, since it does not take the query or response into account, all
 queries from a given source will share fate. this means your authority
 server will go completely silent on some recursive name server if it
 sends too many of any kind of query no matter how diverse those queries are.

I would emphasize a different aspect of that issue.  DNS Damping, like
the firewall based schemes, allows a bad guy to silence a DNS server
as far as a block of IP addresses is concerned.  Silencing DNS has
long been an important part of various security attacks.


 second, you go completely silent when in dampening mode. there are no
 slip responses by which an actual recursive name server might be able to
 get real answers by retrying with UDP or escalating to TCP during times
 that its IP address is being spoofed by an attacker.

I think that is a feature of DNS Damping intended to answer the complaint
about DNS RRL sending a constant data stream of attack packets.
In my biased view that is a misfeature based on failing to apply
the same DDoS scenario to DNS Damping.  (See below.)


 third, you're giving each end-host address its own fate, so that a
 spoofed-source attacker could cause you to flood a distant network
 simply by iterating through that network's address space.

There are references to dealing with blocks instead of individual
addresses.  Perhaps that is intended for a future version.


 your solution seems to be optimized for overly busy recursive servers
 who you want to deny excess service to, and does not deal at all with
 the case of spoofed-IP reflected amplified attacks.

How so?


 i also note that you have misunderstood (and therefore mischaracterized)
 DNS RRL, according to this text from your web site:

http://lutz.donnerhacke.de/eng/Blog/DNS-Dampening

  They can rate limit http://ss.vix.com/%7Evixie/isc-tn-2012-1.txt the
  queries per client. 

DNS Damping *is* rate limiting.  It differs from DNS RRL only in
details about counting and limiting rates.  (One detail that I think
is important but that Lutz Donnerhacke evidently does not is that
RRL does not count *queries*; RRL counts *responses*.)


  Unfortunately this generates only a constant data
  stream of attack packets. DDoS works well with limited data rates per
  server, if you misuse enough servers. On the other hand the
  implementation required a lot of resources.

 this text contains two factual errors: (1) that DNS RRL generates a
 constant stream of attack packets: we attenuate the attacks in two ways,
 first by dropping most (or at worse half) of the responses, second by
 responding with TC=1 packets that are no larger than the requests; and

I think the complaint is that DNS RRL with slip 0 and the recommended
responses-per-second 10 could send 10 DNS responses/second to the
victim.  If the responses were DNSSEC ANY or NXDOMAIN results, they
could be more than 1500 bytes or 12 Kbits each.  If you stop there,
it sounds bad, because a bad guy could use 833 DNS servers running
RRL to reflect roughly 100 Mbit/sec to the victim.

But don't stop there.  Ask what happens if those 833 DNS servers
use Damping instead of RRL.  I think the same roughly 100 Mbit/sec
would be sent to the victim, because no rate limiting, Damping
included, should trigger on a busy server at much less than 10
requests/second.
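A quick check of that scenario, assuming 1500-byte responses at the recommended 10 responses/second per server:

```python
# 833 reflecting servers, each sending 10 full-size responses/second
# toward the victim (sizes taken from the discussion above).

SERVERS = 833
RESPONSES_PER_SEC = 10
RESPONSE_BYTES = 1500

victim_mbps = SERVERS * RESPONSES_PER_SEC * RESPONSE_BYTES * 8 / 1e6
print(f"~{victim_mbps:.0f} Mbit/s at the victim")
```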


 (2) that DNS RRL uses a lot of resources: we use about a megabyte of
 storage to keep unique state for 5 queries per second for five
 seconds, which is trivial.

I also wonder about that claim that DNS RRL requires a lot of resources.
From my superficial reading, it appears to me that DNS Damping uses more
resources than DNS RRL.  What kind of resources are we talking about,
memory or CPU cycles?  How was resource utilization measured in the
two implementations, and what are the numbers?


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS RRL light?

2012-09-14 Thread Vernon Schryver
 - Rate limit clients to 100 qps. Drop for 5 mins if exceeded.
 - Rate limit client to 5 identical queries per second. Drop for 5 mins
 if exceeded.

 Any logical errors, or other errors, you see there? Also, any, simple
 to implement, enhancements you could add?

The first rule is to do whatever works in your situation no matter
what outsiders say.

Are you counting identical queries including qtype as well as qname?

Are you dropping identical queries or all queries?

Counting all queries might not work on a server for a popular domain,
because there can be a lot of legitimate queries from a single carrier
grade NAT IP address.  Even counting 5 identical queries or responses
might cause problems for a sufficiently popular web site.

Counting identical queries instead of identical responses might let a
bad guy reflect a stream of 1500+ byte NXDOMAIN responses using
a stream of queries for unique bogus domains.

Blocking at 5 identical queries per second sounds reasonable to me,
but blocking for 5 minutes sounds far too long, because it might
unnecessarily drop legitimate queries.  A 5 minute window means that
on average it will be closed 2.5 minutes after the attack stops.
If your scheme can react to the first 5 identical queries in a
second, why not block for only 10 or 15 seconds?
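A minimal sketch of that kind of limiter, with a short block window (all names and parameter values here are mine, not from any real implementation):

```python
import time
from collections import defaultdict

# Allow at most LIMIT identical queries per second from a client, and
# block that (client, query) pair for only BLOCK_SECS afterward.

LIMIT = 5
BLOCK_SECS = 15

counts = defaultdict(int)          # (ip, qname, qtype) -> hits this second
window_start = defaultdict(float)  # start of the current 1-second window
blocked_until = {}                 # (ip, qname, qtype) -> unblock time

def allow(client_ip, qname, qtype, now=None):
    """Return True if this query should be answered."""
    now = time.monotonic() if now is None else now
    key = (client_ip, qname, qtype)
    if blocked_until.get(key, 0) > now:
        return False
    if now - window_start[key] >= 1.0:     # start a new one-second window
        window_start[key] = now
        counts[key] = 0
    counts[key] += 1
    if counts[key] > LIMIT:                # 6th identical query in a second
        blocked_until[key] = now + BLOCK_SECS
        return False
    return True
```

A real server would also bound the table size and expire idle state; this only illustrates the short-window reaction time argued for above.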


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS RRL light?

2012-09-14 Thread Vernon Schryver
 From: Mohamed Lrhazi ml...@georgetown.edu

 I am counting query_type+query_name, implemented as:
 set qhash [b64encode [md5 $q_type:$q_name]]
 I guess that's my state blob. Is that good?

If possible, I'd use a hash function with lower CPU costs, although
speed might not matter.  It sounds as if this is all done in shell
scripts, and so it's not expected to handle many queries per second.

Any good 128-bit hash will have no more collisions than a 128-bit
cryptographic hash function and might have fewer.  Contrary to ancient
superstition, cryptographic hashes have no fewer or better distributed
collisions than any other good hash function.  Cryptographic hashes
are only supposed to be hard to reverse, and their collisions are
supposed to be hard to predict.  Those characteristics and their mundane
collision characteristics are based only on hope and a trivial number
(compared to 2**128) of tests, instead of mathematics like that behind
other hash functions such as cyclic redundancy checks.

If the IP address is handled elsewhere, any reasonable (not necessarily
'good') 32-bit hash such as `sum` or `cksum` should also have no
collisions that matter, be a lot faster, and need fewer bits.
The size of the state blob matters if you want to handle lots of
queries/second and so need to store window size blobs;
(10 seconds)*(100K blobs/second)=1M blobs.
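As an illustration of the cheaper-hash suggestion (this is my sketch, assuming the client IP is tracked separately as the text suggests, not code from any RRL implementation):

```python
import zlib

# A 32-bit CRC of "qtype:qname" is cheap to compute and small to store,
# compared to base64(md5(...)).

def query_key(qtype: str, qname: str) -> int:
    return zlib.crc32(f"{qtype}:{qname}".lower().encode())

# Rough sizing from the text: a 10-second window at 100K queries/sec
# means holding about 1M state blobs, so key size matters.
blobs = 10 * 100_000
print(f"{blobs} blobs; a 4-byte key instead of a 16-byte md5 digest "
      f"saves ~{blobs * 12 / 1e6:.0f} MB on keys alone")
```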

The key for each BIND9 RRL state blob consists of
   - 129 bits of IP address.  Perhaps embedded IPv4 addresses would
  work, but this time I chose a separate IPv4/IPv6 bit.
   - qtype
   - simplistic 32-bit hash of qname to avoid a fixed size 256 byte
   buffer or worse, malloc
   - DNS class compressed to one bit for now
   - whether the response is an error, NXDOMAIN, or normal
   - whether TCP was used.
Most of the BIND9 RRL blob consists of other stuff including links,
counters, and timers.


 I am also dropping everything, during the drop window, as I did not
 want to keep the query info for too long, but since I will lower the
 window, it might be feasible.

Dropping everything increases the likelihood of dropping legitimate
requests, and so is another reason to make your window as short as
possible.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DoS with amplification: yet another funny Unix script

2012-09-12 Thread Vernon Schryver
 From: Tony Finch d...@dotat.at

 I don't think diffuse is the right word - this kind of attack can be
 very intense. 

agreed, it's only diffused among qnames and qtypes.

   If you have a large domain signed with NSEC it's trivial for
 an attacker to enumerate the domain, and RRL will not treat this as an
  attack. Or if you are a large scale DNS hosting provider the attacker can
 get a list of domains you host from copies of TLD zones. Having got a list
 of names, the attacker can then reflect lots of traffic via your server
 which will be treated as OK by RRL.

It would be easy to change the RRL patch to have yet another optional
rate limit counting all non-error responses to an IP address block
as if they were the same.

It would have some negative aspects:

  1. Under that kind of attack, the TC=1 slipping is worse than useless,
and so it would not trigger the TC=1 responses.

  2. Targets of the reflection attack would get no DNS service at all
unless they magically know to switch to TCP.

  3. One can argue that this kind of defense belongs in a firewall
that understands nothing about DNS except rate limiting based
on source IP address and destination port 53.

  4. It would double the memory spent on counting responses.
The amount of memory required to count responses on very busy
(10K or 100K qps) DNS servers has always been a concern.
It is why the RRL patch saves a 4-byte hash of the qname instead
of using a 256-byte block (or worse, dynamically allocating space
for each qname).  However, it's only a factor of 2.

I use the argument of #3 to respond to observations about the high
costs of DNS/TCP and objections to TC=1 slipping.  At sufficiently
high rates, a DNS/TCP DoS attack looks like TCP SYN flooding.  TCP SYN
flooding is commonly handled without bothering the application and
without allocating or timing out a TCB in either the kernel or a
firewall.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS ANY record queries - Reflection Attacks

2012-09-11 Thread Vernon Schryver
 From: Eric Osterweil eosterw...@verisign.com

 So, can I just make sure I understand the RRL idea?  If, under
 non-attack circumstances, I get a traffic rate of `r' from a given
 subnet, but an amplification attack sends me `99*r' (causing a total
 traffic rate of `100*r'), then I should rate limit?  So, my back of
 the envelope calculation says that I will reward the attack traffic
 over the non-attack traffic.  That is, if I limit the response rate
 back down to `r', then I will drop 99/100 responses to reach that
 target.  My legitimate client (subnet) has only about a 1/100 chance
 of getting each query answered here (all other response slots are given
 to my adversary)...

That computation might be correct if DNS clients did not retransmit,
if the BIND RRL idea involved only discarding responses,
and if Paul and I proposed dropping 99% of all traffic for a CIDR block.
We advocate none of that.

We propose dropping only identical responses to a given CIDR block
instead of all responses.

The BIND RRL code has a notion of slip or responding while rate
limiting with TC=1.  It has a default slip rate of 2, or responding
with TC=1 instead of dropping every other identical response.

A DNS client that retransmits N times to a DNS server that answers
with TC=1 50% of the time will get an answer to 1-(0.5)^N of its
queries.  For N=4, it will get a TC=1 answer 94% of the time.
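Spelled out as a quick computation (with the default slip rate of 2, an excess identical response is dropped with probability 0.5, so a client fails only if all N tries hit a drop):

```python
# Fraction of retrying clients that eventually get a (TC=1) answer,
# given a uniform 50% drop probability per try.

def answered_fraction(n_tries: int, drop_prob: float = 0.5) -> float:
    return 1 - drop_prob ** n_tries

for n in (1, 2, 3, 4):
    print(f"N={n}: {answered_fraction(n):.4f} of clients get an answer")
```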


 I think rate limiting is kind of the wrong direction.
 Did I misunderstand some aspect?

What do you think would be the right direction?  Doing nothing is
not acceptable.

We think that rate limiting is only a workaround for the failure
of the responsible parties to implement BCP 38 or other effective
mechanisms to stop the abuse they transmit on behalf of their users.
In the distant future we hope it won't be needed.


 Also, when you say, ``shockingly effective,'' how can we measure
 effectiveness, in order to verify the approach?

One way to measure the effectiveness of a defense is to compare the
work the bad guy must do with the benefit to the bad guy.  In this
case, rate limiting at 10 identical responses and using the default
{slip 2;} means that in common scenarios, the amplification is less
than 1.  The bad guy gets less result from a reflection DoS attack
than from a direct DoS attack.  Under the circumstances, I think that
is effective.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DoS with amplification: yet another funny Unix script

2012-09-11 Thread Vernon Schryver
 From: Klaus Darilion klaus.mailingli...@pernau.at

 On 10.09.2012 19:48, Paul Vixie wrote:
  please don't do, or promulgate, this. ddos filtering in order to do more
  good than harm has to be based on the attack's answer, not on its query.

  vernon schryver and i explain this in the technical note at
  http://www.redbarn.org/dns/ratelimits/.

I fear that the technical note linked from that page fails to emphasize
enough the drawbacks of firewall defenses against DNS reflection
attacks.  I was recently asked about a proposed security advisory about
some consequences of using iptables to defend against DNS reflection
attacks.  My response was that it's not entirely the fault of the DNS
server if a user's iptables facilitate denying DNS service.  As I think
that technical note says, any rate limiting has some danger of being
exploited by bad guys, but simplistic firewall 'deny' rules are far
too blunt and much too easily exploited.  Any firewall rule that doesn't
compute DNS responses about as well as a DNS server is simplistic.
If your firewall is smart enough to know that a stream of DNS requests
will generate 1600-byte NXDOMAIN responses, why don't you turn off the
DNS server and let the firewall do all of the DNS work?


 Is it correct that filtering based on responses is better only for 
 NXDOMAIN responses or error responses?  If the forged requests are 
 requests for valid domain names, then analyzing the request should be 
 fine too.

Because each request implies its response, in theory there is no
difference between filtering based on requests or responses.  In
practice it is impossible for anything except a DNS server to know the
response that a request will generate.  How can anything but the DNS
server or its doppelganger know that a request will generate NXDOMAIN,
SERVFAIL, REFUSED, or be dropped by an ACL in the DNS server?  How can
anything but the server know about variations in non-error responses
due to views?

There are few reasons to forge requests that reflect as SERVFAIL or
REFUSED, but you can get large amplifications with DNSSEC NXDOMAIN.
If your defense works against non-error reflection attacks but not
NXDOMAINs, what will your adversaries do, and how quickly?

The party line that the BIND RRL only looks at responses is not entirely
accurate.  For obvious efficiency reasons (and ease of coding), it looks
at the RRsets or error code that would be used to generate the response
instead of the final, on-the-wire response.  Analyzing cooked instead
of raw DNS responses is related to my main question about some of the
suggested firewall schemes.  How can you convert every incoming DNS
packet to text, run it through awk or grep, and handle a
non-trivial load?


 Of course it is easiest to track the states in the name server, but I do 
 also see advantages in doing the filtering in an external node (upstream 
 firewall, local iptables). For example I do not have to worry about the 
 used name server software and I can use the same rules for bind, 
 powerdns, nsd ... backends. I also suspect that filtering in kernel 
 space may be faster. As a personal preference I try to separate 
 functionality into dedicated nodes.

That's an argument for all reasonable DNS server implementations having
rate limiting knobs instead of a reason for bad and dangerous rate
limiting in firewalls.  As Paul said, the BIND RRL code is open.

Why not remove all ACLs, views, and related mechanisms from your DNS
server and put them in your IP firewall?


 The tuple mask(IP), imputed(NAME), errorstatus is used to select a 
 state blob. In the amplification attacks on our authoritative servers we 

 Thus, it may take some time until the attacker starts with domain1.com 
 again. If I understand the Responder Behavior correct, this would mean 
 that filtering is never triggered if a domain is not queried 
 RESPONSES-PER-SECOND times per second. Or do I miss something here?

I'm not sure I understand.  If that points out that an attack that is
too diffuse to be noticed by the BIND RRL code might be noticed by a
firewall rule, then I agree.  I'd also say that can be seen as a feature
instead of a defect, because during less diffuse attacks, legitimate
requests from the forged CIDR block will still be answered.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DoS with amplification: yet another funny Unix script

2012-09-11 Thread Vernon Schryver
 From: Colm MacCárthaigh c...@stdlib.net

   Any firewall rule that doesn't compute DNS responses about as good as a 
  DNS server is simplisitic.

 With the greatest of respect; that thinking is itself simplistic.
 Where I work we concentrate on writing very good firewalls. Sometimes
 these rules even have to parse DNS, just as the DNS server must ...

That just as the DNS server must is false.  For example, I doubt
that those firewalls do enough DNS computing to recognize and limit a
stream of responses generated from a single wildcard before the responses
have been transmitted by the DNS server.  They probably don't even
recognize pernicious but simple NXDOMAIN cases.  They might, but probably
don't, notice that a stream of responses are approximately identical
referrals from authoritative servers or approximately identical recursions
from recursive servers.  I think DNS rate limiting must do all of that
while not slowing other high volume traffic.


 An in-the-path firewall actually has access to more data than the DNS
 server alone does. For example, it can build up a simple profile of
 expectation values for IP TTLs on a per-source network basis.

That is also overstated.  In practice DNS servers don't do such things,
but they could.  (Yes, you can get UDP/IP headers in a modern BSD
UNIX daemon.)  I doubt that the computing costs of tracking IP TTLs
would be worthwhile for a DNS server with a high legitimate load.  I
wonder if administrative costs such as dealing with the false positives
due to route flapping or re-homing would be worthwhile even in a
firewall.  Remember the best of all firewalls that is advertised at
http://www.ranum.com/security/computer_security/papers/a1-firewall/


   It can
 use all IP data for that profile; DNS, HTTP, whatever it's seen. Those
 expectation values can then be used to detect and reject spoofed
 packets, in combination with other statistical scores. That's just one
 simple example - there are many more.

Firewalls have good and valuable uses in lower layer defenses.  However,
firewalls are usually weak crutches for applications.  They are very
popular for quick plugs in application holes, because badly designed
and written applications are so popular and it's a lot easier to kludge
something into a firewall than fix the typical lame application code
implementing a worse than stupid de facto standard protocol.

Even good protocols have weaknesses.  For example, every protocol, and
especially those using UDP, must have basic features including:
   - optional authentication and authorization
   - exponential or steeper backoff for retries
   - rate limiting on requests from evil as well as innocently broken clients
The original DNS lacked all of those.
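As a sketch of the backoff feature named in that list (parameter names and values are mine, not from any protocol specification):

```python
import random

# Each retry waits roughly twice as long as the last, capped, with
# random jitter so that synchronized clients don't retry in lockstep.

def backoff_schedule(attempts, base=1.0, cap=64.0, rng=None):
    """Return the retry delays, in seconds, for a failing client."""
    rng = rng or random.Random(42)
    delays = []
    for i in range(attempts):
        delay = min(cap, base * (2 ** i))            # 1, 2, 4, 8, ... capped
        delays.append(delay * rng.uniform(0.5, 1.0)) # jitter
    return delays

print([round(d, 2) for d in backoff_schedule(5)])
```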


 The other big reason is pragmatism; unix daemons using recv() are
 extremely limited in the rate at which they can process packets. far
 far higher throughput is possible via other techniques that involve
 handling batches of packets at much smaller timescales. A nice benefit
 of the approach is that it frees higher-level development teams from
 having to worry about low-level mitigation, and that the work is
 re-usable across many products. During real attacks, if a packet makes
 it to the dns server, the game is already lost.

I disagree with most of that.  Since it is about general philosophies
and what is theoretically possible instead of operational issues or
even DNS theories, I'll resist the impulse to pick it apart.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS ANY record queries - Reflection Attacks

2012-09-11 Thread Vernon Schryver
 From: Eric Osterweil eosterw...@verisign.com

  That computation might be correct if DNS clients did not retransmit,
  if the BIND RRL idea involved only discarding responses,
  and if Paul and I proposed dropping 99% of all traffic for a CIDR block.
  We advocate none of that.

 Hmm.. I may still be missing some nuances, what are the specifics?

Instead of my writing a new complete description or, even worse for
other readers, copying and pasting the existing documentation,
please see the recently mentioned http://www.redbarn.org/dns/ratelimits
That page has a link to a technical note talking about the problem,
a link to code for serious specifics,
and even a link to draft changes to the BIND9 ARM.


 So, I don't understand something... If you see a lot of identical
 responses from an authority, could that not be because it is an authority
 for those responses?  How do you distinguish a netblock with multiple
 resolvers, or anycast resolvers? 

The BIND RRL code is part of the resolver.  It does not see a lot of
identical responses from an authority except when it is the authority.


   Perhaps more directly, are you
 dropping responses from legitimate clients and how do you feel about
 them being collateral damage?

Paul Vixie and I are not advocating DNS rate limiting in firewalls.
We're talking about rate limiting in the hosts at the ends of the
intertubes.


 So, every identical response either gets dropped or gets its TC bit set?

No, every *excessive* identical response is either not sent (dropped)
or a tiny TC=1 response is sent instead.


  A DNS client that retransmits N times to a DNS server that answers
  50% with TC=1 of the time will get an answer to 1-(0.5)^N of its
  queries.  For N=4, it will get a TC=1 answer 94% of the time.

 Wait, I'm very confused... The above sounds like you respond to
 94% of the reflector attack queries (which furthers the attack).

No, I was pointing out that P(R1R2R3R4)=P(R1)*P(R2)*P(R3)*P(R4).
Given a uniform drop probability of 50%, the probability that all 4
responses to an initial request and its 3 retransmissions will be dropped
is 6%.  (Or should that be N=5?  I always seem to be off by 1;
in that case 97% of requests would be answered.)
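The off-by-one worry is easy to settle numerically (an initial request plus 3 retransmissions is N=4 tries; plus 4 retransmissions is N=5):

```python
# Probability that every one of N tries is dropped, at 50% per try.

def all_dropped(n_tries: int, drop_prob: float = 0.5) -> float:
    return drop_prob ** n_tries

for n in (4, 5):
    print(f"N={n}: {all_dropped(n):.4f} dropped, "
          f"{1 - all_dropped(n):.4f} answered")
```

N=4 gives about 94% answered; N=5 gives about 97%, so both figures in the text are internally consistent.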


 Well, if doing something hurts the legitimate clients more than doing
 nothing, I think you need to be upfront about that.  I think that's
 worse than doing nothing.

That's like opposing mechanical spam filtering by pointing to mechanical
false positives while ignoring the higher false positive rate of the
otherwise inevitable purely manual filtering on subjects and senders.
If you do nothing, then legitimate clients will be denied all service
by the firewall rules advocated here or by IP bandwidth rate limits
at the source (DNS servers) and the DoS targets.  Remember why it's
called a DoS.

You are saying that you would rather try to receive 1000 1500-byte
bogus DNS responses per second, along with all your legitimate DNS
responses that don't get dropped from router queues by that flood,
instead of 10 bogus responses and useful responses to 94% of your
requests.


 OK, but you've also almost certainly eliminated the legitimate
 client's ability to query you for responses.

That is simply false.  When Paul Vixie wrote that the BIND RRL code
is effective, he wasn't talking about theory or small scale tests.  It
has been in use on some major DNS servers for months.  If there were
enough collateral damage to talk about, someone would have complained.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DNS ANY record queries - Reflection Attacks

2012-09-11 Thread Vernon Schryver
 From: Eric Osterweil eosterw...@verisign.com

 Fair enough, except I'm pretty sure some of the deployment being
 talked about (even in this thread) is at the authority (not the
 resolver)...  

  Paul Vixie and I are not advocating DNS rate limiting in firewalls.
  We're talking about rate limiting in the hosts at the ends of the
  intertubes.

 Again, this thread started somewhere else.  Clearly, I agree that
 people should be able to manage their own user experiences. ;)

By definition, all DNS servers and clients run on hosts.  When a
router or bridge does DNS stuff, it is a host and subject to the
Host Requirements RFCs.  I and I think some others have been talking
about filtering DNS requests and/or responses in the following hosts
and/or firewalls close to those aforesaid hosts:
   - DNS servers, either authority servers or resolvers
   - putative DNS clients that are targets of DNS reflection DoS attacks.

The money-back warranty on the BIND RRL patch only covers its installation
on authority DNS servers, although I have received positive reports
from resolver operators.  The current version includes additional
features suggested by operators of combined authorities/resolvers.
  


 1 - If you uniformly drop 50% of a 100x amplification attack, you
 are still reflecting 50x amplification, right?

That is wrong for the BIND RRL patch.  With default parameters, the
BIND RRL drops 50% of responses and substitutes a small TC=1 response
for the other 50%.  That gives an amplification for the responses it
sends of <= 1.0 and an overall amplification of <= 0.5.
(It currently forgets about EDNS in the TC=1 responses, giving a default
amplification of < 0.5.  An influential commentator calls that a bug.)
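A sketch of that arithmetic, under assumed packet sizes (the text gives
none; a ~64-byte query and a minimal TC=1 reply of about the same size
are illustrative guesses):

```python
request_bytes = 64    # assumed DNSSEC query size (not from the text)
tc1_bytes = 64        # assumed minimal TC=1 reply without EDNS

# Amplification of the responses actually sent: the small TC=1 reply
# versus the request that elicited it.
sent_amp = tc1_bytes / request_bytes      # stays <= 1.0

# Overall amplification: with the default slip, half the rate-limited
# responses are dropped outright, halving the average again.
overall_amp = 0.5 * sent_amp              # stays <= 0.5

print(sent_amp, overall_amp)
```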


 2 - If you wait for (say) 4 responses, your stub (the client
 driving the upstream resolver) has almost certainly timed out, and
 the DDoS has succeeded, if I'm not mistaken, right?

I think that is mistaken.  Consider the implications of the default
values of the attempts and timeout keywords in /etc/resolv.conf and
of _res.retry and _res.retrans in the libc interface to the de facto
standard stub resolver.
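For concreteness, a sketch of the stub retry window those defaults
imply.  The glibc defaults (timeout 5 seconds, attempts 2) are from
resolv.conf(5); the per-retry doubling backoff is an assumption about
classic libresolv behavior and varies by libc:

```python
RES_TIMEOUT = 5    # seconds; glibc default for "options timeout:n"
RES_DFLRETRY = 2   # glibc default for "options attempts:n"

def worst_case_wait(timeout=RES_TIMEOUT, attempts=RES_DFLRETRY, servers=1):
    """Total time a stub waits before giving up, assuming the classic
    doubling backoff (an assumption; exact behavior varies by libc)."""
    return sum(timeout * (2 ** retry) * servers for retry in range(attempts))

print(worst_case_wait())   # 5 + 10 = 15 seconds with one server
```

A stub that waits on the order of 15 seconds is not defeated by one
TC=1 retry over TCP taking a few hundred extra milliseconds.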



 Then it should be easy enough for someone to explain the above, no?
 Having deployed something does not mean that it was effective, and
 blocking traffic does not tell me how much legit traffic and how much
 attack traffic was blocked.  I don't see why this is so hard, I just
 want to understand the assertion.

Consider where the BIND RRL patch has been installed and then ask
yourself why *you* have not noticed any collateral response losses.
See https://lists.dns-oarc.net/pipermail/dns-operations/2012-June/008453.html


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Research Project: Identifying DNSSEC Validators

2012-09-06 Thread Vernon Schryver
 From: =?ISO-8859-1?Q?Matth=E4us_Wander?= matthaeus.wan...@uni-due.de

  I assume I'm odd, because I'm not eagar to put the invisible HREF
  anchor on my web pages because of the extra DNS transactions imposed
  on users.  I also have vague worries I can't articulate about privacy
  concerns.
  My answer to putting a simple IMG beacon on my web pages would
  be a flat never.  There are too many technical and legal issues.
  For example, what about privacy issues with the referer string?

 Can't argue with that. If privacy is an issue, you won't become friends
 with foreign HTTP resources.

I don't understand that.  Whether an HTTP server is foreign or domestic
(for any value of domestic) does not by itself determine its
trustworthiness.  I start by assuming any HTTP server is untrustworthy,
but that doesn't imply that I should involve third parties.

The privacy issues I meant involve the third parties counting DNSSEC
aware resolvers.  The commercial hit counters also claim to be
trustworthy, even as they sell their measurements.  I assume that none
of you guys would do something like correlating referer strings, your
results, and WHOIS or other e-appended values to send email to web
masters offering to sell better DNS resolver software.  I also assume
that if a financial institution put your beacons on their TLS web
pages, none would try to 'leverage' the resulting referer, weak DNS
resolver, and IP address data.  And so forth and so on including other
attacks I can't imagine.  However, a security policy based on assumed
good intentions is incompetent.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Side effects of enabling DNSSEC?

2012-08-03 Thread Vernon Schryver
 From: Mohamed Lrhazi ml...@georgetown.edu

 I am learning DNSSEC and was wondering if there was any side effects
 to enabling DNSSEC on a domain, if there were mistakes in the
 configuration?

 In other words, if I were to enable DNSSEC on a zone, and miss something,
 could I effect anything other than DNSSEC validation itself? and if I did
 effect it, how bad would that be? and also, how would you go about testing
 that everything is working fine once enabled?

If the signatures don't work, then resolvers that pay attention to
DNSSEC will answer requests for your DNS records with SERVFAIL.

 I guess I should ask the same question about side effects when there are no
 configuration mistakes at all :) Should I expect anything to break because
 now DNSSEC is enabled and working?

More stuff always means more stuff that can and so will break.

I think that there are better questions:
1. Will you ever enable DNSSEC on your domains?
2. If so, should you do it now or later?

#1 is for you to answer.
If your answer for #1 is yes, then the answer for #2 is that you
should join the DNSSEC party yesterday or today at the latest.
Because current versions of BIND pay attention to DNSSEC by default,
ever larger fractions of the Internet, and eventually most of it
(well, outside the jurisdictions of authoritarian regimes),
will penalize DNSSEC errors with SERVFAIL.  I suspect that today only
a large minority of the Internet does that, and so now is the time to learn
from the inevitable mistakes.  Today you might need to use `dig +dnssec`
or `dig +adflag` to see the effects of DNSSEC.  Tomorrow you will need
to use `dig +cdflag` to not see them.

For example, I get a long delay and then SERVFAIL for
`dig www.dnssec-failed.org` from a resolver with the BIND 9.9 default
for dnssec-validation.  A real-life rather than artificial example,
also found on http://www.dnssec.comcast.net/, is the quick SERVFAIL that
I get from `dig usbountyhunters.com`.


Vernon Schryver v...@rhyolite.com


[dns-operations] DNSSEC, IPv6 glue, multiple DNS servers, and eating your own dog food

2012-07-20 Thread Vernon Schryver
An obvious filter for prospective registrars occurred to me at this,
nearly the end of my week-long effort to get my trivial domains
signed.

A registrar that does not have DS records for its main domain names
might lack experience dealing with DNSSEC registrations.

A registrar whose main domains lack AAAA records for any NS names might
lack real-world information about IPv6 glue.

A registrar or reseller that does not have a WHOIS record with
a minimal set of servers or at least NS records might lack empathy
for those of us who think such things are a good idea.

You don't need to ask people at the registrar.
`dig example.com ds`, `dig example.com aaaa`, and `dig example.com ns`
can give more authoritative answers than anything people might say.

It's funny but not amusing that those commands give better results
for the unreal example.com than for any of the registrars that I
recall being mentioned here recently.  This should be particularly
embarrassing for one of them.

I tried several other registrars on
http://www.dotandco.net/ressources/icann_registrars/details/position.en
and found *none* that could pass that trivial filter.
Talk about a race to the bottom!


Vernon Schryver v...@rhyolite.com

P.S. 'Eating your own dog food' is not an insult but the old programmer's
motto about using the stuff that you would foist on others.

P.S. The imperfections in ARIN's reverse DNS web page are consistent
with the lack of DS RRs for arin.net.  ICANN passes.


Re: [dns-operations] thoughts on DNSSEC

2012-07-18 Thread Vernon Schryver
 It seems to me that registrants are in a position to make
 determinations about the extent of a registrar's actual support of
 DNSSEC on the basis of their interface support and the claims of their
 support staff.

How does a prospective customer check a registrar's interface without
doing something approaching reality like registering a throw-away name?
The costs in time and hassles of that are a barrier.

Worse, not only could send mail be better in theory than web forms,
but you can't tell whether the web forms are sugar coatings on dung worse
than typical send mail without a lot more than an initial registration
and twiddling.  For example, from an initial registration I wouldn't
have known about GoDaddy's understanding of permission to try to bill
credit card numbers they happen to have for wonderful new services like
alert mail boxes (nothing to do with SMTP, IMAP, or POP).

For another example, Opensrs/Tucows claims to support IPv6, but
when I last tried an IPv6 address in the store front web forms
somehow provided to my reseller by Opensrs/Tucows, they choked.
You win if you bet that the Opensrs answer was send mail with your
glue to support.  I blame my reseller for not simply diving in and
fixing the web form parsing, but not more than I value past personal
attention and advocacy inside the registrar.


 The ICANN pages are surely a good place to start, but
 only to start.

I think that overstates the value of the ICANN page.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] question for DNS being attacked

2012-06-28 Thread Vernon Schryver
 From: Michael Graff mgr...@isc.org

 It may also make Kaminsky style attacks easier if an attacker can
 blind an auth server from handing out responses.  If the counter values
 are real from the RFC style paper, every other response becomes a
 truncated reply in a flood situation.  This will extend the attack
 window by  the time it takes to establish a TCP connection and query,
 or to the time it takes to retransmit the query plus TCP handshake if
 the blinding is successful.  This assumes the second query works.  But
 in reality it has the same chance as the first.

 Assuming about 100ms for the TCP handshake and two seconds for timeout
 and retry followed by the TCP handshake, this means the window for
 potential false responses moves to about 1100ms on average.

 If a UDP reply would normally make it to the server in 100 ms, this
 opens a window 11 times as wide.

That conclusion does not hold, because it does not define the narrow
window alternative.  11 times as wide as what?

Causing a DNS server to stop answering your target is a security issue,
but the alternative to rate limiting based on (IP,qname,type,class)
is not infinite bandwidth.  If DNS server software does not rate limit
its own answers, then its answers will be rate limited by bottlenecks
in the path between the server and the client.  This is true whether
the bottlenecks are in firewalls or routers.

To prevent an authoritative server from answering your requests, I
need only trigger the rate limiting between it and you.  If the rate
limits are based only on bandwidth, then I will cause it to try to
send more than that limit.  Before botnets and DNSSEC, that often would
have been difficult.  Today I can send as many requests/sec as I want
and I can use DNSSEC to make the responses at least 20 and often 50
times larger than the requests.

Consider choices for IP bandwidth limits.  Because of the 20X-50X
amplification of DNSSEC, aren't ipfw/iptables limits likely to
correspond to roughly the values that you would choose for (IP,qname,type)
limits?  What is the difference in a security attack whether an auth
server is silenced by random DNSSEC requests that overrun its output
bandwidth limit or its (IP,qname,type) limit?  I see these differences:

  - Depending on the security attack, the attacker might not know
 the qname.  In that case, the attacker must fall back to
 exploiting bandwidth limits.
 
  - During the attack, the server might still answer other queries 
 if it uses (IP,qname,type) limits.  Another way of stating
 this difference is that relying on bandwidth limits to stop
 participation in DoS attacks makes it easier for an attacker
 to completely silence a DNS server in other kinds of attacks.

  - (IP,qname,type) rate limiting makes it less likely that your
  DNS server will be used as part of a DoS attack.

(IP,qname,type) rate limiting is not a silver bullet and IP bandwidth
rate limiting using fair or any other queue discipline is not magic
pixie dust.
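The rough numbers behind the 20X-50X figure used above, with assumed
packet sizes (illustrative, not from measurements in the text):

```python
request_bytes = 64       # assumed size of a small spoofed DNSSEC query
response_bytes = 3000    # assumed size of a large DNSSEC answer

amplification = response_bytes / request_bytes

# To keep a server pinned against a 10 Mbit/s output bandwidth cap, an
# attacker needs only amplification-times-less request bandwidth.
cap_bits = 10e6
attacker_bits = cap_bits / amplification

print(f"amplification ~{amplification:.0f}x; "
      f"attacker needs ~{attacker_bits / 1e3:.0f} kbit/s of requests")
```

The asymmetry is the whole argument: a few hundred kbit/s of forged
requests can saturate a bandwidth-only limit that took megabits to set.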


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] question for DNS being attacked

2012-06-28 Thread Vernon Schryver
 From: Michael Graff mgr...@isc.org

  That conclusion does not hold, because it does not define the narrow
  window alternative.  11 times as wide as what?

 With a slip factor of 2, every other packet will be dropped, and the
 other packets returned will have the truncated bit set.  If this is
 incorrect, please explain what it does do.

 This translates to a normal truncated response 50% of the time, and a
 timeout the other 50%.

That is true only if you modify the patch to accept a rate limit
threshold of 0.  Otherwise the truncated responses and timeouts happen
only during attacks involving flooding, including what I assume are
meant by spoofing flood attacks.  I don't think running with a rate
limit threshold of 0 makes any sense, but I've often been called lacking
in imagination.

If I should have understood 'normal' as 'normal during a spoofing
flood attack', then we can talk about what happens with IP instead
of (IP,qname,type) rate limiting and the thresholds for particular
instances of universal IP rate limiting.


 Ignoring the penalty BIND 9 and other servers
 are likely to assign to this misbehaving server, the timeout keeps the
 Waiting for a response window open much, much longer.

Again, longer than what?  The alternative is *not* an absence of rate
limiting and so an absence of windows during which the target is
prevented from receiving responses from the real authoritative server.
As I said before, whether one uses (IP,qname,type), IP bandwidth rate
limiting, or both, the target can be prevented from getting responses
from authoritative servers.  The only things that vary are the contents
of the packets used by the attacker to blind the target, perhaps (but
not necessarily) the bandwidth used by the blinding packets, and whether
the thresholds for the rate limiting were consciously chosen to deal
with any kind of DNS issue.  It's a mere implementation detail of the
attack whether the window is held open by the authoritative server's
refusal to answer or by configured or de facto bandwidth rate limits
in the server or the path from the server to the target.

A separate aspect of this supposedly much, much longer window is that
it seems to assume that after the client has received a truncated or
TC=1 response and is going through the DNS/TCP dance, it will still
accept forged, evil DNS/UDP responses from the attacker.  Is that true
of common resolving servers and resolver libraries?  It's not how I'd
write a resolver.  Instead I'd discard all of the state needed to
accept apparently stale UDP responses before the TCP SYN is sent.


 Waiting for a response window open much, much longer.  This timeout is
 largely server-dependent, and some may wait multiple seconds.  This
 leaves the window open for a spoofing flood attack to sneak in.

Again, longer than what?

 While I commend you and Paul on the RRL work you've done, I think it is
 improper not to mention this in the documents you write.  It may be not
 a big deal to the administrator of the zone, but it is up to them to
 decide that.  Some may prefer to be a flooding source rather than make
 their zone more prone to spoofing, even if the actual odds are low.  The
 biggest problem here is that the zone publisher's goals of not being
 spoofable are entirely dependent on the resolver asking the questions,
 without DNSSEC in the mix.

The notion that there is an alternative to rate limits is wrong.
There are rate limits in the path from the server and to the client
whether anyone has consciously configured them below the limits
imposed by service bit rates.  The lowest limit might be in firewalls
near the server or the client or in intermediate router queues, but
it will exist and can often (almost always?) be utilized by an
attacker using the 20x-50X amplification of DNSSEC.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] ok, DNS RRL (rate limits) are officially, seriously, cool

2012-06-25 Thread Vernon Schryver
From: Klaus Darilion klaus.mailingli...@pernau.at 

 Nice. But I wonder why there is a drop-down of outgoing packets
 during an amplification attack. I would expect that outgoing traffic
 is constant. Maybe, in this case also legitimate queries are blocked
 (false positive).

Why would false positives happen only while there are lots of true positives?

This rate limiting scheme is not an automatic IP address or domain
name ACL.  The only likely false positives are legitimate requests both
for the same records as the attack requests and from the same IP as
the forged requests.  If the reduction indicates false positives, then
the bad guys are forging requests that are from real clients and for
the same names, but not by themselves enough to reach the rate limit.
So I think the reduction could be false positives only if the attack
involves a lot of differing client IP addresses and some very popular
names.
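A toy sketch (nothing like the real BIND implementation) of why the
false-positive condition is so narrow: responses are accounted per
(masked client address, query name, query type), so a legitimate client
is only penalized when forged traffic matches both its network and its
query.  The /24 mask, the 5/sec rate, and the helper name are all
illustrative choices:

```python
import time
from collections import defaultdict

RATE = 5          # responses/sec allowed per key (illustrative)
WINDOW = 1.0      # seconds of history kept per key

counts = defaultdict(list)   # key -> timestamps of recent responses

def allow_response(client_ip, qname, qtype, now=None):
    """Return True if a response may be sent, False if rate-limited."""
    now = time.monotonic() if now is None else now
    net = ".".join(client_ip.split(".")[:3])    # crude IPv4 /24 mask
    key = (net, qname.lower(), qtype)
    recent = [t for t in counts[key] if now - t < WINDOW]
    if len(recent) >= RATE:
        counts[key] = recent
        return False            # real RRL would drop or slip to TC=1
    recent.append(now)
    counts[key] = recent
    return True
```

A flood forged as (192.0.2.0/24, "example.com", ANY) never touches the
counter for a different name or a different network, which is why false
positives require the attack to match real clients on both axes.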

Note also that the graphs don't say whether a reduction in outgoing
packets happens during an attack without the rate limiting.  The
reasonable guess is that somewhere in the path from the real and
attacking clients up to and including the server there are bottlenecks
that let the attack hurt legitimate traffic, but we don't know
where.  Maybe the attack blocks legitimate requests in router or
firewalls between the server and legitimate clients.


From: Phil Regnauld regna...@nsrc.org

} That's assuming all other clients are behaving properly in the
} first place, could be a non negligible number of malware generating
} this background noise. Their existence might be revealed by rate
} limitation.

Because this rate limiting scheme is not an automatic IP address
or name ACL, I don't understand how that might happen.  Why would
bad guys be continuing forging about 1 qps for the same clients and
name as during the real attack?

} But yes, it's worth digging. 

Agreed.  However, the obvious test of checking for a reduction in
legitimate responses during an attack would be hard (how could you
tell?) and unsavory.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Why would an MTA issue an ANY query instead of an MX query?

2012-06-23 Thread Vernon Schryver
 From: Florian Weimer f...@deneb.enyo.de

  Emergency patches against ANY to last for a day or two for lack of
  other available tools can make good sense--for a day or so.  But
  spending any long term effort on ANY queries in this context is the
  same thinking that brought us SPF as the final ultimate solution
  to the spam problem (FUSSP), because as we all knew, spam requires
  forged senders.

 But unlike spam, these attacks require spoofed source addresses.

Was I really that unclear?  Of course forged IP source addresses
are a critical part of DNS reflection DoS attacks, just as bulk
is a critical part of spam.  My point is that it is necessary to
pay attention to the necessary aspects of the problem and deal with
those instead of trivial efforts against the current wrapping paper.

From the history of obvious bogus spam FUSSPs, such as the many
variations of email authentication and 'prove the mail sender is
a human' defenses against unsolicited bulk email (spam) sent to uninvolved
third parties, I predict that the next solution to DNS reflection attacks
after the current 'disable AUTHORITY and ADDITIONAL sections' and
'disable ANY' will be 'disable DNSSEC'.

Solutions analogous to 'know your customer before allowing outgoing
bulk connections to TCP port 25', such as 'disable or restrict open
recursive DNS servers to known users' or even 'install response rate
limiting DNS software' (not to mention BCP 38), are resisted as too
hard.  The saving grace is that the monetary rewards for allowing DNS
reflection attacks aren't as large as those for allowing unsolicited
bulk email.


 Perhaps it's time to admit defeat, call our legislators, and suggest
 that they mandate source address validation by service providers.

Speaking of easier non-solutions that would not only not solve the problem
but create worse problems ...

On the other hand, if service providers were liable for damages
caused by forged IP source addresses (or forged SMTP envelopes) ...


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] dns response rate limiting (DNS RRL) patch available for testing

2012-06-12 Thread Vernon Schryver
 From: Ken A k...@pacific.net
 To: dns-operati...@mail.dns-oarc.net

 On a authoritative + recursive server, instead of a separate view, we use:
 acl trusted { x.x.x.x/z; };
 allow-recursion { trusted; };

 Is there any way to apply this patch so that it does not affect a 
 specific acl, such as trusted addresses?

 Or, is it recommended/required that we configure separate views to use this?

Separate views are required to apply rate limiting to some but not
all DNS clients, unless you are of the school that holds
authoritative+recursive servers are always utterly wrong.  In that
case separate servers are required.

Would it be easy to transform your configuration file to use views via
the include directive?  My named.conf files look something like

view insiders {
match-clients { goodnets; };
recursion yes;
include privatezones;
include publiczones;
response-policy {
...
};
};
view outsiders {
match-clients { any; };
recursion no;
include publiczones;
rate-limit { ... };
};


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Why would an MTA issue an ANY query instead of an MX query?

2012-06-12 Thread Vernon Schryver
 From: Tony Finch d...@dotat.at

  Yes, how is BCP 38 deployment going?

 Someone on NANOG recently mentioned http://spoofer.csail.mit.edu/

http://rbeverly.net/research/papers/spoofer-imc09.html
and the last slides in
http://rbeverly.net/research/papers/spoofer-imc09-presentation.pdf
suggest that relying on BCP 38 deployment is unsound.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] DDoS botnet behaviour

2012-06-11 Thread Vernon Schryver
 From: Tony Finch d...@dotat.at
 To: Vernon Schryver v...@rhyolite.com
 cc: dns-operati...@mail.dns-oarc.net

 The reason I'm basing my work on a Bloom filter is to avoid any per-client
 scaling costs. There's a fixed per-packet overhead, a fixed memory cost
 (which should be scaled with the server's overall load), and a fairly
 cheap periodic cleaning task. No dynamic memory allocation.

How many hash functions are you using, what are they, and how do you
know that they are sufficiently independent to give a tolerable false
positive rate without using as much RAM as a single classic hash table?
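For reference, the standard false-positive estimate for a Bloom filter
with m bits, n inserted keys, and k truly independent hashes is
p ~ (1 - e^(-kn/m))^k; dependent hashes only make it worse.  A sketch
(the 1 MiB / 100K-client / 4-hash numbers are illustrative):

```python
import math

def bloom_fpr(m_bits: int, n_keys: int, k_hashes: int) -> float:
    """Classic Bloom-filter false-positive estimate, assuming the
    k hash functions are independent and uniform."""
    return (1.0 - math.exp(-k_hashes * n_keys / m_bits)) ** k_hashes

# e.g. 1 MiB of bits tracking 100K keys with 4 hashes:
p = bloom_fpr(8 * 2**20, 100_000, 4)
print(f"{p:.2e}")
```

The estimate answers only half the question in the text: it gives the
rate for ideal hashes, but says nothing about how far real, correlated
hashes fall short of it.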

I hate dynamic memory allocation, and so it doesn't happen in my code
after it reaches steady state.  The operator can use an optional
parameter to set initial sizes to minimize the cold start problem.

What is your estimate for the memory required for a simple hash
based scheme?  Memory size even on the busiest servers was a primary
design goal for my code and I was told to handle 100K qps.  Since
another goal is to stop dropping responses the instant the forged
requests cease, to avoid being a DNS DoS on the target, the rate
must be computed from only very recent experience (which eliminates
the ancient BSD alpha decay function).  That means one doesn't
need more than (a very few seconds)*(100K qps)*(memory per response).
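That sizing expression works out to single-digit megabytes even at the
stated load.  A sketch, where the 32-byte per-entry cost is an
assumption (the text does not give one):

```python
qps = 100_000          # the design load named in the text
window_seconds = 3     # "a very few seconds" of recent experience
bytes_per_entry = 32   # assumed: hash key plus counters and list links

total = qps * window_seconds * bytes_per_entry
print(f"~{total / 2**20:.1f} MiB of steady-state rate-limit state")
```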

I don't think much of timers, and so currently my code has none.
My rate counters are what might be called self-clocking.  I think there
will eventually be a timer for logging, which should save a lot of
cycles when logging is used and cost nothing when logging is not used.


 Your operational criticisms of the probabilistic approach are quite
 correct. It may also turn out to cost too much to get an acceptably low
 false positive rate.

Do you believe that a single set of Bloom filter parameters can serve
an interesting class of installations without using more RAM than a
classic hash table?  To understate it--I don't.

How does one learn the false positive rate on an operational system
without running a deterministic rate limit system in parallel?
If you can make the false positive monitor work well enough to be
used everywhere it must be used, including at 100K qps, what's the
point of the Bloom filter?


 But, it might be worth putting a smallish Bloom filter in front of an
 accurate traffic accounting system, so that the server only needs to spend
 resources tracking the heaviest users, along the lines described in
 http://pages.cs.wisc.edu/~estan/publications/elephantsandmice.pdf

Bloom filters are a neat idea for the right jobs.  I also think
that the 2 or 3 (not to mention more) hash lookups in a Bloom filter
would use far more CPU cycles per query than needed for an accurate
accounting system.  When not logging and after reaching steady
state, my code only hashes the request, changes pointers in at most 3
linked lists, increments a counter, and tells the caller in client.c
or query.c to continue or drop.  Isn't that about what each of the
2 or more Bloom filter hashes would do?

Bloom filters combine several poor hash functions with a single,
extremely small (in context) bit array to give answers that are
imperfect but amazingly good.  That's cool, but not a mandate for
replacing all other hashes, trees, tries, etc.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] Why would an MTA issue an ANY query instead of an MX query?

2012-06-11 Thread Vernon Schryver
 From: Tony Finch d...@dotat.at

 I think it's wrong to focus on ANY queries: restricting them just
 encourages the attackers to move on to another query type. For a domain
 with DNSSEC you get almost as much data in return to an MX query - 2KB vs
 1.5KB for cam.ac.uk.

Today I see 2232 byte responses for another type from the authoritative
servers for another domain often discussed in this context.  That
obvious type is not TXT, SPF, MX, or anything else that might be
deleted, deprecated, shrunk, compressed, moved to an apex, or whatever.

ANY queries might be of little use to computers, but I find them useful
while chasing DNS problems.

Emergency patches against ANY to last for a day or two for lack of
other available tools can make good sense--for a day or so.  But
spending any long term effort on ANY queries in this context is the
same thinking that brought us SPF as the final ultimate solution
to the spam problem (FUSSP), because as we all knew, spam requires
forged senders.  That analogy goes farther than one might realize,
because some of the ANY solutions I've heard include analogs of
the amazingly uninformed and wrong headed SPF re-invention of SMTP
source routes.


Vernon Schryver v...@rhyolite.com

P.S.  I know the current line is that SPF is not and never was a FUSSP;
that doesn't change what was said at the time.  I also know that DKIM
has some real operational value, despite the fact that plenty of
unsolicited, objectionable bulk email advertising is delivered with
valid DKIM signatures.


  1   2   >