Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2014-01-15 Thread Hannes Frederic Sowa
Hi!

On Wed, Jan 15, 2014 at 01:26:20PM -0800, Colm MacCárthaigh wrote:
 Unfortunately I can't share data, but I have looked at a lot of it. In
 general, I've seen TTLs to be very stable. Most ECMP is flow-hashed these
 days and so as long as the path is stable, the TTLs should be identical. If
 there's some kind of transition mid-datagram, the TTLs may legitimately
 mismatch, but those events seem to be very rare.

Counterexample: Linux does not use flow-hash-steered ECMP. You see the
effect on end-hosts because of the route lookup caching in the socket
(as long as it doesn't get invalidated or unconnected).

The problem is that as soon as such a knob is provided people could
generate DNS-blackholes (until timeout of resolver and retry with TCP,
maybe this could be sped up with icmp error messages).  Only a couple
of such non-flow-hash-routed links would suffice to break the
internet for a lot of users. I am pretty sure people will enable this
knob as soon as it is provided and word is spread.

If we want to accept that, we could just force the DF bit on all fragments
and ignore the users behind some specific minimal MTU. That would solve the
problem more elegantly, with the same consequences. And error handling with the DF bit
is better specified and handled by the kernel, thus more robust and easier to
debug (in case UDP path MTU discovery is implemented in the OS). ;)
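A minimal sketch of the DF-bit idea, assuming Linux: the constants below come from <linux/in.h> and are defined by hand, since the Python socket module does not export them on every build.

```python
import socket

# Linux-specific constants from <linux/in.h>; defined by hand because the
# Python socket module may not export them on every build.
IP_MTU_DISCOVER = 10   # socket option controlling per-socket PMTU behaviour
IP_PMTUDISC_DO = 2     # always set DF; never fragment locally

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
# With this set, a send() larger than the path MTU fails with EMSGSIZE and
# the kernel caches the discovered MTU, so errors surface to the
# application instead of being hidden behind silent fragmentation.
assert s.getsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER) == IP_PMTUDISC_DO
s.close()
```

UDP path MTU discovery then depends on ICMP "fragmentation needed" errors reaching the sender, which is exactly the error handling discussed above.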

 netfilter would be fine, but it'd be nice to not incur any state cost
 beyond what the UDP re-assembly engine is keeping already.

netfilter reuses the core reassembly logic (at least in IPv4, not yet
for IPv6). As soon as netfilter is active, packets will get reassembled
by netfilter and passed up the network stack without going into the core
fragmentation cache again. So the TTLs could be kept in the frag queues,
and further fragments could be required to match that TTL exactly on
append. So that would be no problem to do; I really doubt it is wise
to do so.
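As a toy illustration (plain Python, not kernel code) of what hard-matching the TTL in a frag queue would mean:

```python
# Toy model of a fragment queue that pins the TTL of the first fragment
# seen and drops any later fragment whose TTL differs. Real frag queues
# live in the kernel; this only illustrates the matching rule.
class FragQueue:
    def __init__(self):
        self.ttl = None
        self.frags = []

    def append(self, offset, payload, ttl):
        if self.ttl is None:
            self.ttl = ttl            # pin TTL of the first fragment
        elif ttl != self.ttl:
            return False              # hard TTL mismatch: drop
        self.frags.append((offset, payload))
        return True

q = FragQueue()
assert q.append(0, b"frag1", 57)
assert q.append(1480, b"frag2", 57)
assert not q.append(2960, b"frag3", 55)   # spoofed fragment, differing TTL
```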

Greetings,

  Hannes  

___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2014-01-15 Thread Colm MacCárthaigh
For DNS, we have the option to respond with a TC=1 response, so if I
detected a datagram with suspicious or mismatching TTLs, TC=1 is a decent
workaround. TCP is then much more robust against intermediary spoofing. I
can't force the clients to use DF though.
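A minimal sketch of that workaround (hypothetical code, not any particular server's implementation): flip the QR and TC bits in the 12-byte DNS header and return an empty response, which makes the client retry over TCP.

```python
import struct

def tc1_response_header(query_header: bytes) -> bytes:
    """Build a minimal truncated-response header from a query header."""
    txid, flags, qdcount, _an, _ns, _ar = struct.unpack("!6H", query_header[:12])
    flags |= 0x8000   # QR=1: this is a response
    flags |= 0x0200   # TC=1: truncated, please retry over TCP
    return struct.pack("!6H", txid, flags, qdcount, 0, 0, 0)

# Example: a query header with RD=1 becomes a TC=1 response header.
query = struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, 0)
resp_flags = struct.unpack("!6H", tc1_response_header(query))[1]
assert resp_flags & 0x8000 and resp_flags & 0x0200
```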


On Wed, Jan 15, 2014 at 2:08 PM, Hannes Frederic Sowa 
han...@stressinduktion.org wrote:

 Hi!

 On Wed, Jan 15, 2014 at 01:26:20PM -0800, Colm MacCárthaigh wrote:
  Unfortunately I can't share data, but I have looked at a lot of it. In
  general, I've seen TTLs to be very stable. Most ECMP is flow-hashed these
  days and so as long as the path is stable, the TTLs should be identical.
 If
  there's some kind of transition mid-datagram, the TTLs may
 legitimately
  mismatch, but those events seem to be very rare.

 Counterexample: Linux does not use flow-hash-steered ECMP. You see the
 effect on end-hosts because of the route lookup caching in the socket
 (as long as it doesn't get invalidated or unconnected).

 The problem is that as soon as such a knob is provided people could
 generate DNS-blackholes (until timeout of resolver and retry with TCP,
 maybe this could be sped up with icmp error messages).  Only a couple
 of such non-flow-hash-routed links would suffice to break the
 internet for a lot of users. I am pretty sure people will enable this
 knob as soon as it is provided and word is spread.

 If we want to accept that, we could just force the DF bit on all fragments
 and ignore the users behind some specific minimal MTU. That would solve the
 problem more elegantly, with the same consequences. And error handling with
 the DF bit
 is better specified and handled by the kernel, thus more robust and easier
 to debug (in case UDP path MTU discovery is implemented in the OS). ;)

  netfilter would be fine, but it'd be nice to not incur any state cost
  beyond what the UDP re-assembly engine is keeping already.

 netfilter reuses the core reassembly logic (at least in IPv4, not yet
 for IPv6). As soon as netfilter is active, packets will get reassembled
 by netfilter and passed up the network stack without going into the core
 fragmentation cache again. So the TTLs could be kept in the frag queues,
 and further fragments could be required to match that TTL exactly on
 append. So that would be no problem to do; I really doubt it is wise
 to do so.

 Greetings,

   Hannes




-- 
Colm

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2014-01-15 Thread Hannes Frederic Sowa
On Wed, Jan 15, 2014 at 03:33:02PM -0800, Colm MacCárthaigh wrote:
 For DNS, we have the option to respond with a TC=1 response, so if I
 detected a datagram with suspicious or mismatching TTLs, TC=1 is a decent
 workaround. TCP is then much more robust against intermediary spoofing. I
 can't force the clients to use DF though.

That would need to be implemented as ancillary data (cmsg) access and cannot
be done as a netfilter module (unless the DNS packet generation is also
implemented as a netfilter target). Because this touches core code, it
really needs strong arguments to get accepted. Maybe this can be done
as part of the socket fragmentation notification work. I'll have a look,
but first I want to think about how easily this can be circumvented. Maybe
you already thought about that?

Thanks,

  Hannes


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-28 Thread Haya Shulman

 That claim against having [injected] spoofed content into the DNS
 response (despite the use of Eastlake cookies for protection) is false
 unless that attack was against DNS clients and servers using DNS
 cookies, and not merely the cookies described in
 https://tools.ietf.org/html/draft-eastlake-dnsext-cookies-03
 but cookies in an as-yet unpublished proposal with a payload checksum.
 Note that, as far as I know, there are no available implementations even
 of original-flavor cookies.



You may have missed the beginning of that discussion... Paul Vixie already
suggested adding a CRC to protect against our fragmentation attacks, as
well as against the new attack idea that I proposed earlier in this thread, fyi:


i expect that in consideration of your fragmentation work, he will add a
 32-bit CRC covering the full message to the EDNS option that contains the
 cookie.



In any case, it is great that you also agree that the published proposal
may be vulnerable, and propose using a checksum to prevent those attacks.


On Sat, Oct 26, 2013 at 12:12 PM, Haya Shulman haya.shul...@gmail.comwrote:

  No number of hosts sending packets can exceed 25,000 500-byte packets
 per second to a single 100 Mbit/s 802.3 host. In fact, 802.3 preamble,
 headers, CRC, and IFG limit the 500-byte packet rate to below 25K pps. However,
 multiple attacking hosts could cause excessive link-layer contention
 (nothing to do with host or host network interface interrupts or
 buffers) and so packet losses in either or both directions for
 legitimate DNS traffic and so the reported effects.



 Without data such as packet counts from standard tools such as `netstat`,
 my bet is what I said before, that the application fell behind, its socket
 buffer overflowed, and the results were as seen. However, I would not
 bet too much, because there are many other places where the DNS requests
 or responses could have been lost including:
 - intentional rate limiting in the DNS server, perhaps even RRL
 - intentional rate limiting in the kernel such as iptables
 - intentional rate limiting in a bridge (hub) in the path
 - unintentional link layer rate limiting due to contention for
 bridge buffers or wires. At full speed from the attacking systems,
 unrelated cross traffic through hubs in the path or on the wires
 to DNS server would cause packet losses including losses of valid
  answers and so timeouts and so the observed effect.


 No iptables rules were set; no RRL was applied (when sending bursts to
 incorrect/closed port - legitimate responses arrived ok).

 There are these measurements that studied loss due to traffic volume, and
 they found that kernel loss occurs above 100-150 Kpps (packets per
 second), with 64-byte packets. One of these works, in addition to measuring
 loss in the kernel, also measured the performance of snort under heavy load,
 and found loss could occur above 100KB.
 http://www.sciencedirect.com/science/article/pii/S143484110600063X
 http://www.sciencedirect.com/science/article/pii/S1084804509001040

 Two significant differences between our setting and theirs are that they
 used only a single host to generate the traffic, and that the traffic
 consisted of 64-byte packets (we used 500-byte packets).
 In any case, I am curious as to whether the loss occurred in the DNS software
 and if increasing the buffers in the DNS software can mitigate the problem
 (I'll run it again to confirm).
 Thanks.






-- 

Best Regards,

Haya Shulman

Technische Universität Darmstadt

FB Informatik/EC SPRIDE

Mornewegstr. 30

64293 Darmstadt

Tel. +49 6151 16-75540

www.ec-spride.de

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-26 Thread Daniel Kalchev

On 26.10.2013, at 12:37, Haya Shulman haya.shul...@gmail.com wrote:

 This is essentially an IP packet modification vulnerability and in order
 to do these, you don't even need fragmentation. This might happen even
 due to malfunctioning network adapter or other network device, not
 necessarily an attack. One of the reasons for DNSSEC existence is to
 prevent processing of damaged DNS data, with malicious origin or not.
 If you are concerned with improperly assembled IP packets, the DNS
 community is the wrong place to ask for a fix. The DNS community can
 only make sure their protocol takes care of such issues, and issues
 like this are totally addressed by technologies such as DNSSEC, TSIG
 etc. But the fundamental fix for this issue has to happen in the
 TCP/IP stack.
 
 
 
 IP does not, and was not designed to, guarantee security - only best effort 
 end-to-end delivery. The discussion was if Eastlake cookies can prevent the 
 attacks: the example I showed was a legitimate way to apply IP fragmentation 
 (which is a feature of IP - it is not a bug) to foil the protection offered 
 by Eastlake cookies and to inject spoofed content into the DNS response 
 (despite the use of Eastlake cookies for protection). This should be of 
 interest to DNS community, unless you argue that the DNS community should 
 rely on IP layer for security of DNS.
 

There is a technology, designed to handle this and other problems of DNS - 
well known as DNSSEC.

Many here, including me, argue that instead of applying medicine that cures 
the symptoms, we should cure the disease instead.

But, just as in human medicine, there are apparently agendas that 
suggest sticking to these symptomatic treatments.

Daniel

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-26 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

  This is essentially an IP packet modification vulnerability and in order
  to do these, you don't even need fragmentation. This might happen even
  due to malfunctioning network adapter or other network device, not
  necessarily an attack. One of the reasons for DNSSEC existence is to
  prevent processing of damaged DNS data, with malicious origin or not.
  If you are concerned with improperly assembled IP packets, the DNS
  community is the wrong place to ask for a fix. The DNS community can
  only make sure their protocol takes care of such issues, and issues
  like this are totally addressed by technologies such as DNSSEC, TSIG
  etc. But the fundamental fix for this issue has to happen in the
  TCP/IP stack.

I do not understand why that paragraph was quoted, because Haya
Shulman's following paragraph is almost unrelated to it.  The main
point of the preceding paragraph seems to be

   DNSSEC protects DNS data against intentional and accidental change
   or corruption in the lower layers.  Fixes for other application
   protocols vulnerable to fragmentation attacks are off topic here.

 IP does not, and was not designed to, guarantee security - only best effort
 end-to-end delivery. The discussion was if Eastlake cookies can prevent the
 attacks: the example I showed was a legitimate way to apply IP
 fragmentation (which is a feature of IP - it is not a bug) to foil the
 protection offered by Eastlake cookies and to inject spoofed content into
 the DNS response (despite the use of Eastlake cookies for protection). 

That claim against having [injected] spoofed content into the DNS
response (despite the use of Eastlake cookies for protection) is false
unless that attack was against DNS clients and servers using DNS
cookies, and not merely the cookies described in
https://tools.ietf.org/html/draft-eastlake-dnsext-cookies-03
but cookies in an as-yet unpublished proposal with a payload checksum.
Note that, as far as I know, there are no available implementations even
of original-flavor cookies.

I do not understand how a blind attack against DNS cookies checksums
would work.  That makes me wonder whether these fragmentation attacks
are blind.  IP fragmentation or TCP segment assembly attacks in which
the attacker can see all of the traffic are, to understate the case,
much less interesting.

If these attacks are blind, how do they fix the UDP checksum?  This
is somewhat relevant, because there are far easier attacks than IP
fragmentation for an attacker who can see DNS traffic.  It is only
somewhat relevant, because the answer is still DNSSEC.
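For reference, the 16-bit ones'-complement Internet checksum (RFC 1071) that UDP uses can be sketched in a few lines of Python (computed here over a bare buffer; real UDP also covers a pseudo-header). A blind attacker who alters payload bytes in one fragment must compensate this sum somewhere else, or the reassembled datagram fails the checksum:

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over a byte buffer."""
    if len(data) % 2:
        data += b"\x00"               # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Changing a single byte changes the checksum, so a blind modification of
# one fragment breaks the UDP checksum of the reassembled datagram.
assert inet_checksum(b"example-dns-payload") != inet_checksum(b"example-dns-pAyload")
```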


This
 should be of interest to DNS community, unless you argue that the DNS
 community should rely on IP layer for security of DNS.

No one said anything like [relying] on the IP layer for security of DNS.
As Paul Vixie repeatedly wrote, cookies are needed to protect against
reflection attacks.  On the other hand, DNSSEC is the source of DNS
data security.  Maybe that's why it's called DNSSEC.

 .


} From: Haya Shulman haya.shul...@gmail.com

} There are these measurments that studied loss due to traffic volume, and
} they found that Kernel loss occurs at above 100-150 Kpps (packets per
} second), with 64 byte packets. One of these works, in addition to measuring
} loss in kernel, also measured performance of snort under heavy load, and
} found loss could occur above 100KB.
} http://www.sciencedirect.com/science/article/pii/S143484110600063X

I see little relevant to current PC hardware or software in that
abstract of a 7 year old paper.  It does tend to contradict Haya
Shulman's apparently unsupported guesses about the causes of the
packet losses she observed.

} http://www.sciencedirect.com/science/article/pii/S1084804509001040

That paper is not as ancient, but it is even more irrelevant.


} Two significant differences between our and their setting is that they used
} only a single host that generated the traffic, and the traffic consisted of
} 64  Byte packets (we used 500Byte packets).
} In any case, I am curious as to wether the loss occured in DNS software and
} if increasing the buffers in DNS software can mitigate the problem (I'll
} run it again to confirm).

I do not understand why spending that effort would be worthwhile.  If,
as I suspect, increasing the DNS forwarder socket buffer size tends to
mitigate the attack, we'll know something we already knew: that the
attack is less likely to work everywhere.  However, no DNS server
software maintainer would consider changing socket buffer sizes
based on this issue.  The fix for DNS insecurity is DNSSEC.
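For what it's worth, the socket-buffer experiment mentioned above is trivial to run; a hedged Python sketch (the kernel may clamp the request, e.g. to Linux's net.core.rmem_max, and Linux doubles the stored value for bookkeeping):

```python
import socket

# Enlarge a UDP socket's receive buffer so bursts are less likely to
# overflow it before the application drains the queue.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
default_size = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
tuned_size = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
# The kernel may clamp the request, so check what was actually granted
# rather than assuming the full 4 MB took effect.
assert tuned_size > 0
s.close()
```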

This paper of Haya Shulman reports that by DNS request flooding of a
recursive, an attacker might determine the DNS client port numbers
used by the server.  That might be interesting in that it contradicts claims
that DNS forwarding is secure because no one can know those port numbers.
(Never mind that the first I've heard of 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-25 Thread Stephane Bortzmeyer
On Thu, Oct 24, 2013 at 09:11:41AM +0300,
 Daniel Kalchev dan...@digsys.bg wrote 
 a message of 247 lines which said:

 This is not an attack on DNS, but an attack on IP reassembly
 technology.

Frankly, I do not share this way of seeing things. Since the DNS is,
by far, the biggest user of UDP and since TCP is already protected by
PMTUD, I do not think we can say it's not our problem.

 This might happen even due to malfunctioning network adapter or
 other network device, not necessarily an attack.

A random modification by a malfunctioning device or an errant cosmic
ray has a very small probability of being accepted (UDP checksum, DNS
checks, etc). We are talking here about a deliberate attack, by a
blind attacker.



Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-25 Thread Stephane Bortzmeyer
On Tue, Oct 22, 2013 at 11:59:04PM +,
 Vernon Schryver v...@rhyolite.com wrote 
 a message of 50 lines which said:

 Why would there be extra support calls?  Wrong keys are no worse
 than wrong delegations 

Of course, they are worse. In the vast majority of cases, lame
delegations (or other mistakes) do not prevent resolution (as long as
one name server works). A wrong key can completely prevent resolution,
leading to a loss of service. The DNS is extremely robust, you have to
try very hard to break it. With DNSSEC, it's the opposite, you have to
be very careful for it to work.

 Why would registrars get support calls about validation problems?
 Do they get calls now (that they answer) from DNS resolver operators
 (other than big resolvers like Comcast) for lame delegations?

See above. I cannot visit http://www.онлайн/ while it works from
$OTHERISP so it's your fault.


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-25 Thread Stephane Bortzmeyer
On Tue, Oct 22, 2013 at 01:28:15PM -0700,
 Paul Vixie p...@redbarn.org wrote 
 a message of 24 lines which said:

 BIND9 V9.9 may surprise you. it has inline signing and automatic key
 management.

I don't think it is a fair description of BIND 9.9 abilities. It does
not manage keys (which, IMHO, means creating them as necessary, removing
them when outdated, changing them intelligently depending on the TTL,
etc.). 


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-25 Thread Vernon Schryver
 From: Stephane Bortzmeyer bortzme...@nic.fr

  Why would there be extra support calls?  Wrong keys are no worse
  than wrong delegations 

 Of course, they are worse. In the vast majority of cases, lame
 delegations (or other mistakes) do not prevent resolution (as long as
 one name server works). A wrong key can completely prevent resolution,
 leading to a loss of service. The DNS is extremely robust, you have to
 try very hard to break it. With DNSSEC, it's the opposite, you have to
 be very careful for it to work.

Let's agree to somewhat disagree about that.  I've found giving one's
registrar the wrong IP address or glue a lot worse than a stupid
delegation in my own zone files.  DNSSEC needs more effort than plain
DNS, but almost none of that extra effort has anything to do with registrars.
Registrars/registries must accept your DNSSEC RRs, but they already
accept your other RRs, so that's little extra work for them.


  Why would registrars get support calls about validation problems?
  Do they get calls now (that they answer) from DNS resolver operators
  (other than big resolvers like Comcast) for lame delegations?

 See above. I cannot visit http://www.онлайн/ while it works from
 $OTHERISP so it's your fault.

I don't understand that.  Of course DNSSEC causes more support calls,
but the calls are to ISPs and IT groups and not to the registrars
trying to sabotage or delay DNSSEC for as long as possible supposedly
because of DNSSEC support calls.

And again, whether or not they do suffer more support calls, let them
charge extra for adding DNSSEC records!  If they can profit from simple
DNS at US$10/year, then they could profit with DNSSEC at US$30/year.

A rational reason for the registrar DNSSEC sabotage is that the margins
on the PKI certs they flog to the punters are a lot more than US$30/year
(proof: free certs), and DNSSEC+DANE will eventually kill that cash
cow.  Yes, no doubt some registrars are too dumb to see that.


   ...

} From: Stephane Bortzmeyer bortzme...@nic.fr

}  This is not an attack on DNS, but an attack on IP reassembly
}  technology.
}
} Frankly, I do not share this way of seeing things. Since the DNS is,
} by far, the biggest user of UDP and since TCP is already protected by
} PMTUD, I do not think we can say it's not our problem.

How does PMTUD protect TCP?  Since when, perhaps 1995, has PMTUD been seen
as protection instead of a vulnerability, thanks to goobers with firewalls?

Why can't similar attacks using TCP segment assembly be mounted against
DNS/TCP?  I've heard of more segment assembly attacks than IP fragment
assembly attacks, albeit against TCP applications other than DNS.


}  This might happen even due to malfunctioning network adapter or
}  other network device, not necessarily an attack.
}
} A random modification by a malfunctioning device or an errant cosmic
} ray has a very small probability of being accepted (UDP checksum, DNS
} checks, etc). We are talking here about a deliberate attack, by a
} blind attacker.

(I thought these latest attacks are not blind, but never mind that.)

I've seen vastly more bit rot undetected by UDP and TCP checksums
(esp. due to my own software and firmware bugs and bugs in green
hardware) than human attacks.  And the number of bad TCP and UDP
checksums reported by `netstat -s` on any even slightly busy host
should be worrisome.

Why does it matter whether the bad bits are natural or man made?
Why not turn on DNSSEC, declare victory, and go home?

Why continually rehash the falling DNS sky?  Aren't there enough other
security issues?  Some that I've heard about are incomparably worse
in consequences as well as ease of attack (e.g. no hours of 100 Mbit/sec
flooding or per-target packet tuning to forge one measly DNS response).


Vernon Schryver    v...@rhyolite.com

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-24 Thread Daniel Kalchev


On 23.10.13 22:17, Haya Shulman wrote:


Sorry for the brief description earlier, fyi, a slightly more 
elaborate design:
The idea is to replace a single middle fragment, e.g., given n 
fragments, for n > 2, we replace some fragment i, s.t. 1 < i < n.
Assume n=3 (and also assume, for simplicity, that fragments arrive in 
order - adjusting for the general case is straightforward).
I want to replace fragment i=2 with a spoofed 2nd fragment.  The 
challenge is to place the spoofed 2nd fragment in IP defragmentation 
cache, such that it is (1) reassembled with the first fragment, but, 
(2) not overwritten by the 2nd legitimate fragment. If the attacker 
plants a spoofed second fragment in a defragmentation cache, it will 
be reassembled with the 1st authentic, but then will be overwritten by 
the legitimate 2nd fragment that will subsequently arrive. To ensure 
that the spoofed second fragment is not overwritten (by the 2nd 
legitimate fragment), we should set its offset to some lower value 
(i.e., this results in a gap - that has to be filled - in the 
resulting reassembled packet). Then when the 3rd (authentic) fragment 
arrives, it is further reassembled with them (1st and spoofed 2nd). 
What remains to do, is to fill the missing gap in 2nd fragment.
So, to launch this, the attacker has to send two fragments: a spoofed 
2nd fragment (which offset is lower than the offset of the authentic 
second fragment) before (or right after) triggering the DNS request, 
and after the three fragments (authentic 1st, 3rd and spoofed 2nd) are 
reassembled a small fragment is sent to fill the missing bytes (in the 
spoofed 2nd fragment). Then the packet is ready to leave the IP 
defragmentation cache.
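A deliberately simplified toy model of the offset trick described above (real stacks differ in how they handle overlapping fragments; this only shows that a fragment stored at a distinct offset is not displaced by the authentic one):

```python
# Toy reassembly cache keyed by fragment offset: a later fragment only
# displaces an earlier one that sits at the same offset. Offsets here are
# illustrative, not taken from any real capture.
cache = {}

def add_fragment(offset, payload):
    cache[offset] = payload

add_fragment(0, b"frag1-authentic")
add_fragment(1000, b"frag2-spoofed")     # lower offset than the real 2nd frag
add_fragment(1480, b"frag2-authentic")   # lands at its own offset; does not
                                         # overwrite the spoofed data
add_fragment(2960, b"frag3-authentic")
assert cache[1000] == b"frag2-spoofed"   # spoofed content survives
```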


This is not an attack on DNS, but an attack on IP reassembly technology. 
Which might work or not work depending on how the particular TCP/IP 
stack functions.
This attack affects any IP based protocol and therefore should in no 
case be labeled DNS vulnerability.


This is essentially an IP packet modification vulnerability and in order 
to do these, you don't even need fragmentation. This might happen even 
due to malfunctioning network adapter or other network device, not 
necessarily an attack. One of the reasons for DNSSEC existence is to 
prevent processing of damaged DNS data, with malicious origin or not.


If you are concerned with improperly assembled IP packets, the DNS 
community is the wrong place to ask for a fix. The DNS community can 
only make sure their protocol takes care of such issues, and issues 
like this are totally addressed by technologies such as DNSSEC, TSIG 
etc. But the fundamental fix for this issue has to happen in the 
TCP/IP stack.


a side by side reading of your earlier draft 
(http://arxiv.org/pdf/1205.4011.pdf) and your current draft:




https://0a94266f-a-62cb3a1a-s-sites.googlegroups.com/site/hayashulman/files/fragmentation-poisoning.pdf?attachauth=ANoY7cpB1yJsBXMWL0_spxDjUMV9m5G_TjI98UgJE6OtoP98H-WrlRJ2AyJVhajdZ5za2vjZ14twuMHuB7NUcRW_EYv36scybuofLgPOwoU2Rvs7zpSnm_Qj3jA3noSc3ibX9b9_7tncZJdGca0FLY8SOrzMTY_O5bd0NPcwBXtDx9vtCjbRisMFf48MiOYFNO-66BY3iyGa584pJ0Sy2vYfI5ZKKCmvJhJsmY96N4XChK5cGgky8eg%3Dattredirects=0


...shows a remarkably different attitude toward dnssec. what led
to your reconsideration?


Your observation is correct. Initially it seemed that large responses 
were a consequence of DNSSEC. But, then we found other techniques to 
cause fragmentation, not related to DNSSEC, like spoofed ICMP 
fragmentation needed (reduces the MTU beyond 1.5KB - and removes the 
requirement of large responses), and malicious domains (created by the 
attacker), with large responses. This made it clear that the attacks 
were not an artifact of DNSSEC.


On the other hand, DNSSEC prevents these (and other known and future, 
unforeseen) attacks against DNS.


No technology, including DNSSEC claims to protect against future 
unforeseen attacks. DNSSEC in this case simply ignores packets that have 
invalid cryptographic signatures, for whatever reason. There might be 
other attacks on DNS that DNSSEC might not be able to defend against.


Daniel

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-23 Thread Dickson, Brian
Paul Vixie wrote:

Haya Shulman wrote:


  so if i add first weaponized by Haya Shulman this would settle the
  matter?

 Thank you, can you please use Amir Herzberg and Haya Shulman (I
 collaborated on this attack together with my phd advisor Amir Herzberg).

it shall be done.

Thank you.

upon deeper consideration, weaponized is the wrong verb, unless you have 
released your software. i can say first published if that will serve your 
purpose.

Sorry to join the discussion late.

FYI, I have been working on a proof-of-concept weaponized implementation of a 
fragmentation-based attack.
(My work is limited only to fragmentation, as I see that as the issue with the 
largest attack surface and which suffers from potential long-tail problems in 
mitigations.)

This work was inspired by Haya/Amir's work, although it did abstract things and 
go back to first principles on what to do and how to do it. The PoC code is a 
clean-room implementation.

I am also loosely collaborating with the CZ folks (Ondřej Surý et al) who are 
also doing their own independent PoC.

There was a presentation of this at the latest DNS-OARC meeting, as well as at 
the last RIPE meeting.

We will, of course, be keeping the code private, and will avoid releasing too 
many details.

When we have specific concrete results, we will share them in a responsible 
fashion.

Regardless of the specifics, the general result should be understood: the 
unsigned aspects of delegations create an exposure to poisoning which allows 
MitM, which in turn facilitates a host of problems for anything that is not 
totally DNSSEC-signed and DNSSEC-validated.

Brian Dickson

P.S. Credit for weaponized even if the code is shared with strict controls, 
rather than released, would be welcome, at the appropriate time.


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-23 Thread Haya Shulman
On Tue, Oct 22, 2013 at 11:15 PM, Paul Vixie p...@redbarn.org wrote:

 Haya Shulman wrote:



so if i add first weaponized by Haya Shulman this would settle the
   matter?
 
  Thank you, can you please use Amir Herzberg and Haya Shulman (I
  collaborated on this attack together with my phd advisor Amir Herzberg).

 it shall be done.


 Thank you.


 upon deeper consideration, weaponized is the wrong verb, unless you have
 released your software. i can say first published if that will serve your
 purpose.



I have implemented the attack code, which I ran in a lab setting
against BIND and Unbound, but it can't be released, since (1) it is an
attack and (2) it is not automated (I adjust it manually for each
target domain, resolver, and response type, e.g., NXDOMAIN, referral or
answer).
In any case, `first published` is fine. Can you please cite the conference
version (i.e., IEEE Conference on Communications and Network Security (CNS)
2013).
Thank you.

  Eastlake cookies is a very neat proposal. In contrast to other
 challenge-response mechanisms, which reuse existing fields for security
 (while those fields were originally designed for a different purpose),
 e.g., source ports, Eastlake's proposal uses EDNS to add randomness in
 order to authenticate communication between resolver and name server. So,
 you are right, it does prevent many attacks, but, it does not prevent all
 the attacks, particularly those that exploit fragmentation. For instance:

 1. what about an IP packet that is fragmented into three fragments, such
 that the EDNS OPT RR is in the third fragment? By replacing the second
 fragment, the attacker can inject malicious content.

 2. another example also involves IP fragmentation, however in this
 scenario the second fragment can be of any size, e.g., a single byte. The
 attacker overwrites the transport layer port of the first fragment, e.g.,
 to its own port and intercepts the packet (along with the cookie); replaces
 the DNS records and forwards the resulting response to the resolver.

 Both tricky but feasible.
 Correct me if I am wrong, but I think that the cookies would not prevent
 these (above) attacks.
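[Editorial note: for concreteness, the cookie mechanism discussed above can be sketched as the raw bytes of an EDNS0 OPT pseudo-RR carrying a COOKIE option. The option code 10 and 8-byte client cookie follow what was later standardized in RFC 7873; the helper name and default UDP payload size are illustrative assumptions, not from the proposal text.]

```python
import os
import struct

def edns_cookie_opt(udp_size=4096):
    """Build a minimal EDNS0 OPT pseudo-RR carrying a COOKIE option.

    Wire layout: root name (0x00), TYPE=41 (OPT), CLASS=requestor's
    UDP payload size, TTL=0 (extended RCODE/flags), then RDATA with one
    option: option-code 10 (COOKIE), option-length, 8-byte client cookie.
    """
    client_cookie = os.urandom(8)  # 64 bits of per-query randomness
    rdata = struct.pack("!HH", 10, len(client_cookie)) + client_cookie
    return (b"\x00" +                                   # root domain name
            struct.pack("!HHIH", 41, udp_size, 0, len(rdata)) +
            rdata)
```

The cookie binds a response to randomness carried in the request, on top of the query ID and source port; as discussed above, that binding is lost once the authenticating bytes sit in a fragment the attacker can avoid touching.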


 i can't tell whether you're wrong, there's not enough detail here. if
 you're able to replace the middle fragment, or perhaps replace all
 fragments except the last one, then only SIG(0) or TSIG or DNSSEC could
 stop you. however, my back of envelope estimate is that replacing the
 middle fragment with one the same size but different content is more than
 just tricky, and replacing all-but-the-last fragment would require many
 hours at 100MBit/sec, which to me places it out of consideration as an
 attack worth defending against.



You are right that cryptographic defenses, e.g., DNSSEC, prevent the
attack.

Sorry for the brief description earlier, fyi, a slightly more elaborate
design:
The idea is to replace a single middle fragment, e.g., given n fragments,
for n > 2, we replace some fragment i, s.t. 1 < i < n.
Assume n=3 (and also assume, for simplicity, that fragments arrive in order
- adjusting for the general case is straightforward).
I want to replace fragment i=2 with a spoofed 2nd fragment.  The challenge
is to place the spoofed 2nd fragment in the IP defragmentation cache such
that it is (1) reassembled with the first fragment, but (2) not overwritten
by the legitimate 2nd fragment. If the attacker simply plants a spoofed
second fragment in a defragmentation cache, it will be reassembled with the
authentic 1st fragment, but then overwritten by the legitimate 2nd fragment
that subsequently arrives. To ensure that the spoofed second fragment is
not overwritten (by the 2nd legitimate fragment), we set its offset to some
lower value (this results in a gap, which has to be filled, in the
reassembled packet). Then, when the 3rd (authentic) fragment arrives, it is
reassembled with the others (the 1st and the spoofed 2nd). What remains is
to fill the missing gap left by the spoofed 2nd fragment.
So, to launch this, the attacker sends two fragments: a spoofed 2nd
fragment (whose offset is lower than the offset of the authentic second
fragment), sent before (or right after) triggering the DNS request; and,
after the three fragments (authentic 1st, 3rd and spoofed 2nd) are
reassembled, a small fragment to fill the missing bytes (in the spoofed
2nd fragment's gap). The packet is then ready to leave the IP
defragmentation cache.
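[Editorial note: the cache behavior this sequence relies on can be illustrated with a toy model. The "first writer wins" overlap policy below is a deliberate simplification (real kernel defragmentation caches differ in detail, and the gap-filling step described above is collapsed into the overlap handling); the sizes and class name are arbitrary.]

```python
# Toy IP defragmentation cache: bytes already present are never
# overwritten by later overlapping fragments ("first writer wins").
class FragQueue:
    def __init__(self, total_len):
        self.buf = bytearray(total_len)
        self.have = [False] * total_len

    def add(self, offset, data):
        for i, byte in enumerate(data):
            pos = offset + i
            if not self.have[pos]:          # later overlaps are discarded
                self.buf[pos] = byte
                self.have[pos] = True

    def complete(self):
        return all(self.have)

q = FragQueue(30)
q.add(8, b"X" * 12)    # spoofed "2nd" planted first, lower offset (bytes 8-19)
q.add(0, b"A" * 10)    # authentic 1st arrives (bytes 0-9; 8-9 already taken)
q.add(20, b"C" * 10)   # authentic 3rd fragment (bytes 20-29)
q.add(10, b"B" * 10)   # legitimate 2nd fragment: fully overlapped, discarded
spoofed_survives = bytes(q.buf[10:20]) == b"X" * 10
```

In this model the reassembled packet completes with the attacker's bytes where the legitimate 2nd fragment should have been, which is the outcome the attack described above aims for.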

So, you are right that it is more involved, but, if the fragmentation
occurs at the same boundary, then it is doable. Just to clarify, when I say
that there is a vulnerability, I mean, that it can be exploited in
practice. As you (and others on this mailing list) mentioned, something
that can be done will not necessarily be exploited, e.g., if it is too
complex and the gain is not significant.

The second example is an extension of the description above.


 i believe that mark andrews of ISC is going to re-release eastlake
 cookies. i expect 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-23 Thread Haya Shulman

 I see I'm stupid for not seeing that in the first message. I did search
 for 'http' but somehow didn't see the URL. But why not simply repeat
 the URL for people like me? Why not the URL of the paper at the
 beginning instead of a list of papers?
 https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf



I did not realise that this was the problem; I thought that for some reason
you could not download from my site. Indeed, using the URL would have been
more convenient, sorry.

By searching for DNSSEC with my PDF viewer, I found what I consider
 too few references to the effectiveness of DNSSEC against the attacks.
 There is nothing about DNSSEC in the abstract, a list of DNSSEC problems
 early, and a DNSSEC recommendation in the conclusion that reads to me
 like a concession to a referee. Others will disagree.



Ok, thanks for this comment, please clarify which paper you are referring
to, and I will check if appropriate references could be added.


- forwarding to third party resolvers.

 I agree so strongly that feels like a straw man. I think
 forwarding to third party resolvers is an intolerable and
 unnecessary privacy and security hole. Others disagree.
 - other mistakes
 that I think are even worse than forwarders.
 - DNSSEC
 Perhaps that will be denied, but I challenge others to read those
 papers with their litanies of DNSSEC issues and get an impression
 of DNSSEC other than sow's ear sold as silk. That was right
 for DNSSEC in the past. Maybe it will be right forever. I hope
 not, but only years will tell. As far as I can tell from a quick
 reading, the DNSSEC issues are valid, but are sometimes backward
 looking, perhaps due to publication delays. For example, default
 verifying now in server software and verifying by resolvers such
 as 8.8.8.8 should help the verifying situation.


Agreed and noted, thank you.

p.s. Can you please cc me when sending responses related to me? Thank you
in advance!

--
Best Regards,
Haya Shulman
Technische Universität Darmstadt

FB Informatik/EC SPRIDE

Mornewegstr. 30

64293 Darmstadt

Tel. +49 6151 16-75540

www.ec-spride.de

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-23 Thread Haya Shulman
 I'm puzzled by the explanation of Socket Overloading in
 https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf
 I understand it to say that Linux on a 3 GHz CPU receiving 25,000
 packets/second (500 bytes @ 100 Mbit/sec) spends so much time in
 interrupt code that low level packet buffers overflow.



Just to clarify: the attacker ran two to three synced hosts, and the burst
was split among those hosts.
The packet rate is not the only factor; burst concentration is actually
much more significant. Specifically, when the packets in the burst have
no (or almost no) interpacket delay, the impact is different; e.g., when
running the same evaluation with a single attacking host (even on the same
LAN), no loss was incurred, even if the attacker transmitted constantly,
since both the attacking host and the (store-and-forward) switch that the
attacker's host was connected to introduced delays between packets
(due to their own interrupts and delays), thus `spreading` the burst and
reducing its impact.

I would be happy to have your thoughts on this additional piece of info,
i.e., the significance of burst concentration (no, or low, interpacket
delay in the arriving burst), which may not have been clear from the
writeup in the paper (I will clarify this in the paper too - thanks).



 That puzzles me for reasons that might be summarized by considering
 my claim of 20 years ago that ttcp ran at wirespeed over FDDI with
 only 40-60% of a 100 MHz CPU.
 https://groups.google.com/forum/#!topic/comp.sys.sgi.hardware/S0ZFRpGMPWA
 https://www.google.com/search?q=ttcp
 Those tests used a 4 KByte MTU and so about 3K pps instead of 25K pps.
 The FDDI firmware and driver avoided all interrupts when running at
 speed, but I think even cheap modern PCI Ethernet cards have interrupt
 bursting. Reasonable network hardware interrupts the host only when
 the input queue goes from empty to not empty or the output queue goes
 below perhaps half full, and then only interrupts after a delay
 equal to perhaps half a minimum sized packet on the medium. I wouldn't
 expect cheap PCI cards to be that reasonable, or have hacks such as
 ring buffer with prime number lengths to avoid other interrupts.
 Still, ...
 IRIX did what I called page flipping and what most call zero copy I/O
 for user/kernel-space copying, but modern CPUs are or can be screaming
 monsters while copying bytes which should reduce that advantage. It
 would be irrelevant for packets dropped in the driver, but not if the
 bottleneck is in user space such as overloaded DNS server.
 That old ttcp number was for TCP instead of UDP, which would be an
 advantage for modern Linux.
 So I would have guessed, without having looked at Linux network
 code for many years, that even Linux should be using less than 20%
 of a 3 GHz CPU doing not only interrupts but all of UDP/IP.


Thanks for this input, and for the reference.



 100MHz/3GHz * 60% * 25000 pps /3000 pps = 17%
 Could the packet losses have been due to the system trying to send
 lots of ICMP Port-Unreachables? I have the confused impression that
 Socket Overloading can involve flooding unrelated ports.



But why would ICMP errors cause loss?
Inbound packets have higher priority than outbound packets.



 How was it confirmed that kernel interrupt handling was the cause
 of the packet losses instead of the application (DNS server) getting
 swamped and forcing the kernel to drop packets instead of putting
 them into the application socket buffer? Were giant application
 socket buffers tried, perhaps with the Linux SO_RCVBUFFORCE?
 (probably a 30 second change for BIND)



This is a good question. The evaluation is based on the following
observation: when flooding closed ports, or other ports (not the ones on
which the resolver expects to receive the response), no loss was incurred,
but all connections experienced additional latency; whereas, when flooding
the correct port, the response was lost, and the resolver would retransmit
the request after a timeout.
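[Editorial note: the "giant socket buffer" experiment suggested in the quoted question is cheap to try. A sketch follows using the portable SO_RCVBUF; the Linux-only SO_RCVBUFFORCE (constant 33, an assumption here since not all Python builds expose it) additionally requires CAP_NET_ADMIN to exceed net.core.rmem_max.]

```python
import socket

# Ask the kernel for a large UDP receive buffer and read back the grant.
# On Linux the kernel doubles the requested value for bookkeeping and
# caps it at net.core.rmem_max unless SO_RCVBUFFORCE is used by a
# privileged process.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
```

Comparing `granted` against the sysctl limits shows immediately whether the "30 second change" would actually enlarge the buffer on a given system.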



 25K qps is not a big queryperf number, which is another reason why I
 don't understand how only 25K UDP qps could swamp a Linux kernel. Just
 now the loopback smoke-test for RPZ for BIND 9.9.4 with the rpz2 patch
 reported 24940 qps without RPZ on a 2-core 2.4 GHz CPU running FreeBSD
 9.0.
 What about the claims of Gbit/sec transfer speeds with Linux?
 https://www.google.com/search?q=linux+gigabit+ethernet+speed
 I'm not questioning the reported measurements; they are what they are.
 However, if they were due to application overload instead of interrupt
 processing, then there might be defenses such as giant socket buffers.


I just want to clarify that I appreciate your questions/comments;
questioning the results is of course an important contribution to the
research. Maybe the writeup requires clarification, e.g., I will check if
the text clearly explains the setup and evaluation, and maybe the
evaluation results were 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-23 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

  I'm puzzled by the explanation of Socket Overloading in
  https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf
  I understand it to say that Linux on a 3 GHz CPU receiving 25,000
  packets/second (500 bytes @ 100 Mbit/sec) spends so much time in
  interrupt code that low level packet buffers overflow.

 Just to clarify, the attacker ran (two to three sync-ed hosts, and the
 burst was split among those hosts).

No number of hosts can deliver more than 25,000 500-byte packets per
second to a single 100 Mbit/sec 802.3 host. In fact, 802.3 preamble,
headers, CRC, and IFG limit the 500-byte packet rate to below 25K pps.
However,
multiple attacking hosts could cause excessive link-layer contention
(nothing to do with host or host network interface interrupts or
buffers) and so packet losses in either or both directions for
legitimate DNS traffic and so the reported effects.
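[Editorial note: the wire-rate bound above checks out back-of-envelope. Per IEEE 802.3, every frame is accompanied on the wire by an 8-byte preamble/start-of-frame delimiter and a minimum 12-byte (96 bit-time) inter-frame gap:]

```python
# 100 Mbit/s Ethernet: how many 500-byte frames per second fit on the wire?
LINK_BPS = 100_000_000
FRAME = 500          # bytes on the wire incl. headers/CRC, per the scenario
PREAMBLE_SFD = 8     # preamble + start-of-frame delimiter
IFG = 12             # minimum inter-frame gap (96 bit times)

naive_pps = LINK_BPS / 8 / FRAME                         # 25,000 pps
wire_pps = LINK_BPS / 8 / (FRAME + PREAMBLE_SFD + IFG)   # ~24,038 pps
```

So the framing overhead alone keeps the achievable rate slightly under the naive 25K pps figure, as stated above.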


  Could the packet losses have been due to the system trying to send
  lots of ICMP Port-Unreachables?

 But, why would ICMP errors cause loss?

Sending ICMP packets requires resources, including wire and hub
occupancy, CPU cycles, interrupts, kernel lock contention, kernel
buffers, network hardware buffers, and so on and so forth.  Any or all
of that can increase losses among the target DNS requests and responses.


 Inbound packets have higher priority than outbound packets.

I either do not understand that assertion or I disagree with it.  I
would also not understand or disagree with the opposite claim.  At
some points in the paths between the wire and the application (or more
accurately, between the two applications on the two hosts), one could
say that input has higher or lower priority than output, but most of
the time the paths contend, mostly first-come-first-served for resources
including memory bandwidth, DMA engines, attention from the 802.3 state
machine, host/network firmware or hardware queues and locks, kernel
locks, application locks, and application thread scheduling.


  How was it confirmed that kernel interrupt handling was the cause
  of the packet losses instead of the application (DNS server) getting
  swamped and forcing the kernel to drop packets instead of putting

 This a good question. So, this evaluation is based on the following
 observation: when flooding closed ports, or other ports (not the ones on
 which the resolver expects to receive the response) - no loss was incurred,
 but all connections experience an additional latency; alternately, when
 flooding the correct port - the response was lost, and the resolver would
 retransmit the request after a timeout.

Ok, so a ~100 Mbit/sec attack on non-DNSSEC DNS traffic succeeded
on a particular LAN.  Without more information, how can more be
said?  Without more data we should not talk about interrupts, I/O
priority, or even whether the attack would work on any other LAN.


 I used the default buffers in OS and resolver. So, you think that it could
 be that the loss was on the application layer?...

I avoid talk about layers above the link layer, because the phrases
are generally at best unclear and confusing.  At worst, the phrases
are smoke screens.  In this case, there is no need to talk about an
application layer, because we are presumably talking about two
application programs that are BIND, NSD, and/or Unbound.  If BIND was
used, then I could (but would try not to) speculate about BIND's
threading and request/response handling and consequent request or
response dropping.

Without data such as packet counts from standard tools such as `netstat`,
my bet is what I said before, that the application fell behind, its socket
buffer overflowed, and the results were as seen.  However, I would not
bet too much, because there are many other places where the DNS requests
or responses could have been lost including:
  - intentional rate limiting in the DNS server, perhaps even RRL
  - intentional rate limiting in the kernel such as iptables
  - intentional rate limiting in a bridge (hub) in the path
  - unintentional link layer rate limiting due to contention for
 bridge buffers or wires.  At full speed from the attacking systems,
 unrelated cross traffic through hubs in the path or on the wires
 to DNS server would cause packet losses including losses of valid
 answers and so timeouts and so the observed effect.


One of the main factors of the attack is `burst
 concentration`. 

That suggests (but certainly does not prove) link layer contention
instead of my pet application socket buffer overflow.  (I mean
overloading of or contention for wires or hubs (or routers?).)


A meta-question should be considered.  How much time and attention
should be given to yet another attack that apparently requires 100
Mbit/sec floods (I don't recall that this paper said how long this
attack flood must continue) and only when DNSSEC is not used?  Many
of us could probably do more interesting 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Doug Barton

On 10/21/2013 08:54 AM, Keith Mitchell wrote:

Applying the same 5-years' now-outside hindsight to this, the benefits
of all that port randomization work seem murky at best - does anyone
have data on many real Kaminsky cache-poisoning attacks took place in
that time ?


The Kaminsky vulnerability was clear, and while not trivial to exploit 
was quite doable. The work that ISC and others did to address this was a 
huge service to the community. If it had not been done, I'm sure things 
in the last 5 years would have been pretty ugly.



The Herzberg/Shulman attacks seem even harder to exploit in
a real (as opposed to lab) environment


I can't judge that, but I think the math that says focus on things that 
we see in the wild over things generally agreed to be academic/unlikely 
is a good one.


Doug



Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Tony Finch
Colm MacCárthaigh c...@stdlib.net wrote:

 This thread concerns the vulnerabilities uncovered in the fragment
 attacks. One of those vulnerabilities is that domains can be rendered
 unresolvable; even when DNSSEC is enabled. That seems like something
 to take seriously.

I am increasingly doubtful that EDNS buffer sizes greater than the MTU are
a good idea.

Apart from avoiding fragments, are there other ways to mitigate this
attack? Perhaps by adjusting the way the recursive server handles lame
authorities, perhaps by making it more eager to re-fetch the delegation
from the parent authorities?

Tony.
-- 
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at first.
Rough, becoming slight or moderate. Showers, rain at first. Moderate or good,
occasionally poor at first.

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Tony Finch
Vernon Schryver v...@rhyolite.com wrote:

 Have you turned on DNSSEC where you can?  If not, why not?

Can we have less of the ad hominem please.

Tony.
-- 
f.anthony.n.finch  d...@dotat.at  http://dotat.at/
Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at first.
Rough, becoming slight or moderate. Showers, rain at first. Moderate or good,
occasionally poor at first.


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Edward Lewis

On Oct 21, 2013, at 14:32, someone wrote:

 But who cares who got there first?  Every request
 I see for credit is recorded in my private accounting as a debit against
 the credibility of the person demanding credit, because credit demands
 suggest interests which suggest biases and so inaccuracy.



What drives the value downward of mailing lists are discussions like this.

One of the failings of the field of DNS is that there's no small set of 
libraries of documents.  As a result, most participants never do the 
literature search phase of research, instead they just go to code.  I'd call 
that experimenting, not researching.  Given the environment, crediting work to 
someone is almost impossible. But that is not something new and unique to the 
DNS or even the Internet.  Most inventions over time were just incremental 
changes to known technology, but for some reason one increment was more valuable 
than all the previous.  E.g., what Edison got right was the color of light, not 
the idea of radiating light from a wire.

As far as what Kaminsky contributed, in my estimation, the novelty was in the 
forging of UDP's sender address and flooding to perform cache poisoning.  
(Cache poisoning itself had been described in the 1990's, which is why there 
was a DARPA contract to develop DNSSEC from 1994 to 1998 or so.)  The DNSSEC 
development flotilla had long been considering how to defeat message 
insertions, that mechanism was not novel in Kaminsky's description.  His major 
contributions were first exposing how to perform an insertion attack when not 
on the path and secondly he visualized the consequences to people.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis 
NeuStarYou can leave a voice message at +1-571-434-5468

There are no answers - just tradeoffs, decisions, and responses.


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Rubens Kuhl
 
 Which brings me to the topic of resolver-behind-upstream attacks which were 
 not commented upon.
 As you know, one of the recommendations of experts and Internet operators, 
 following Kaminsky attack, was `either deploy patches or configure your 
 resolver to use a secure upstream forwarder`, e.g., OpenDNS was typically 
 recommended. The security is established since the resolver is hidden from 
 the Internet and sends its requests only via its upstream forwarder.
 This configuration is still believed to be secure and is recommended by 
 experts.

Would DNSCrypt, supported by OpenDNS, be a possible mitigation to this issue?
 
 As you know we found vulnerabilities in such configuration, and designed 
 techniques allowing to find the IP address of the hidden resolver, and then 
 to discover its port allocation (the attacks apply to per-destination ports 
 recommended in [RFC6056] or to fixed ports).
 This attack can be extremely stealthy and efficient, and applies to networks 
 where communication between the resolver and upstream forwarder is not over 
 TCP, and therefore can be fragmented (fragmentation of a single byte 
 suffices).

Would IPsec between resolver and upstream forwarder be a possible mitigation to
this issue?


Rubens


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Haya Shulman
On Tue, Oct 22, 2013 at 6:20 PM, Rubens Kuhl rube...@nic.br wrote:


 Which brings me to the topic of resolver-behind-upstream attacks which
 were not commented upon.
 As you know, one of the recommendations of experts and Internet operators,
 following Kaminsky attack, was `either deploy patches or configure your
 resolver to use a secure upstream forwarder`, e.g., OpenDNS was typically
 recommended. The security is established since the resolver is hidden from
 the Internet and sends its requests only via its upstream forwarder.
 This configuration is still believed to be secure and is recommended by
 experts.


 Would DNSCrypt, supported by OpenDNS, be a possible mitigation to this
 issue ?


  As you know we found vulnerabilities in such configuration, and designed
 techniques allowing to find the IP address of the hidden resolver, and then
 to discover its port allocation (the attacks apply to per-destination ports
 recommended in [RFC6056] or to fixed ports).
 This attack can be extremely stealthy and efficient, and applies to
 networks where communication between the resolver and upstream forwarder is
 not over TCP, and therefore can be fragmented (fragmentation of a single
 byte suffices).


 Would IPsec between resolver and upstream forwarder be a possible mitigation
 to this issue?


Sure, both solve the problem. In particular, any secure channel protocol
between the proxy resolver and an upstream forwarder prevents the attacks.


 Rubens




-- 

Haya Shulman

Technische Universität Darmstadt

FB Informatik/EC SPRIDE

Mornewegstr. 30

64293 Darmstadt

Tel. +49 6151 16-75540

www.ec-spride.de

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Keith Mitchell
On 10/22/2013 10:52 AM, Haya Shulman wrote:

 Disclosing such potential vulnerabilities remains valuable work, 
 but I think careful consideration needs to be applied to the 
 engineering economics of the best operational-world mitigation 
 approaches.
 
 @/Keith Mitchell/

(My head is *really* hurting from this quotation formatting..)-:
(re-wrapping and indenting to list conventions...)

 I do not advocate to deploy these or other countermeasures. Above
 any doubt you are in the best position to decide which
 countermeasures to deploy.

Not really, OARC does not operate production service-providing
infrastructure except to support a membership organization, most of our
infrastructure is dedicated to data-gathering/testbed/research purposes.
So I defer to *real* DNS infrastructure operators and implementors on
any such judgments.

 The situation with DNS checkers is different from deployment of port 
 randomisation.  DNS checkers is a very important service to the 
 community and the efforts that their operators took to make them 
 available is very valuable. However, an illusion of security is more
  dangerous than not being protected at all (in the latter case one is
  aware that he is not protected and may be attacked).

Fair enough.

 I admit that I do not know what economic effort is required to patch
  DNS checkers which report per-destination ports, recommended in 
 [RFC6056], as secure

Well, more than we've been able to dedicate in the past month or so. I'm
trying to get an estimate of this from those best placed to do the
actual work.

 but I suggested a fix to this vulnerability some time ago, that 
 should be fairly simple to implement;

Yes, but as I explained privately previously, there is no record of this
correspondence through official OARC channels - I did request you
re-send, but I don't have a copy of it.

 the problem with the porttest checker is that each IP address of the
  checker system receives a single query from the tested resolver, and
  so to each such IP address a random port is selected. But, if more 
 than a single query were sent to each checker IP during the test, 
 then the predictable sequence would be easily identified.
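[Editorial note: the proposed fix lends itself to a simple heuristic once each checker IP observes more than one query; a sketch (the function name and tolerance are illustrative assumptions, not part of the correspondence):]

```python
def looks_sequential(ports, tolerance=16):
    """Given the source ports observed across several queries to one
    checker IP, flag a predictable per-destination allocator: successive
    ports from a sequential allocator differ by small, regular steps,
    while a properly randomizing allocator scatters them over the full
    16-bit range."""
    gaps = [(b - a) % 65536 for a, b in zip(ports, ports[1:])]
    return bool(gaps) and all(g <= tolerance for g in gaps)
```

With only one query per IP (the current porttest behavior described above), `gaps` is empty and no judgment is possible, which is exactly the reported blind spot.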

Thank you for this clarification - any further points you have about the
best way to implement the fix to this would be welcome, but are likely
best taken off-list.

Keith



Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread P Vixie
On Tuesday, October 22, 2013 18:57:41 Haya Shulman wrote:
 
 On Tue, Oct 22, 2013 at 6:20 PM, Rubens Kuhl rube...@nic.br wrote:
 
 
  Would DNSCrypt, supported by OpenDNS, be a possible mitigation to this 
issue?
 ...
  Would IPsec between resolver and upstream forwarder be a possible
mitigation to this issue?
 
 Sure, both solve the problem. In particular, any secure channel protocol,
 between the proxy resolver and an upstream forwarder, prevents the attacks.

so, if we develop eastlake cookies, which is necessary in any case due to the 
ddos reflection problems, then your fragmentation related problems go away?

vixie

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Haya Shulman
I am not sure what you mean by `official OARC channels`, I forwarded my
communication on this issue, with porttest operators, to you a month or so
ago. Maybe these were not official channels, but I have not contacted OARC
otherwise, via a different channel.
Can you please advise how to contact OARC through official channels?
Thank you.


On Tue, Oct 22, 2013 at 7:53 PM, Keith Mitchell ke...@dns-oarc.net wrote:

 On 10/22/2013 10:52 AM, Haya Shulman wrote:

  Disclosing such potential vulnerabilities remains valuable work,
  but I think careful consideration needs to be applied to the
  engineering economics of the best operational-world mitigation
  approaches.
 
  @/Keith Mitchell/

 (My head is *really* hurting from this quotation formatting..)-:
 (re-wrapping and indenting to list conventions...)

  I do not advocate to deploy these or other countermeasures. Above
  any doubt you are in the best position to decide which
  countermeasures to deploy.

 Not really, OARC does not operate production service-providing
 infrastructure except to support a membership organization, most of our
 infrastructure is dedicated to data-gathering/testbed/research purposes.
 So I defer to *real* DNS infrastructure operators and implementors on
 any such judgments.

  The situation with DNS checkers is different from deployment of port
  randomisation.  DNS checkers is a very important service to the
  community and the efforts that their operators took to make them
  available is very valuable. However, an illusion of security is more
   dangerous than not being protected at all (in the latter case one is
   aware that he is not protected and may be attacked).

 Fair enough.

  I admit that I do not know what economic effort is required to patch
   DNS checkers which report per-destination ports, recommended in
  [RFC6056], as secure

 Well, more than we've been able to dedicate in the past month or so. I'm
 trying to get an estimate of this from those best placed to do the
 actual work.

  but I suggested a fix to this vulnerability some time ago, that
  should be fairly simple to implement;

 Yes, but as I explained privately previously, there is no record of this
 correspondence through official OARC channels - I did request you
 re-send, but I don't have a copy of it.

  the problem with the porttest checker is that each IP address of the
   checker system receives a single query from the tested resolver, and
   so to each such IP address a random port is selected. But, if more
  than a single query were sent to each checker IP during the test,
  then the predictable sequence would be easily identified.

 Thank you for this clarification - any further points you have about the
 best way to implement the fix to this would be welcome, but are likely
 best taken off-list.

 Keith




-- 

Haya Shulman

Technische Universität Darmstadt

FB Informatik/EC SPRIDE

Mornewegstr. 30

64293 Darmstadt

Tel. +49 6151 16-75540

www.ec-spride.de

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

 Please read my first post in this thread, you should find all information
 there.

I see I'm stupid for not seeing that in the first message.  I did search
for 'http' but somehow didn't see the URL.  But why not simply repeat
the URL for people like me?  Why not the URL of the paper at the
beginning instead of a list of papers?
https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf

By searching for DNSSEC with my PDF viewer, I found what I consider
too few references to the effectiveness of DNSSEC against the attacks.
There is nothing about DNSSEC in the abstract, a list of DNSSEC problems
early, and a DNSSEC recommendation in the conclusion that reads to me
like a concession to a referee.  Others will disagree.

After skimming the papers at 
https://sites.google.com/site/hayashulman/publications
since at first I was not sure which one (my fault), I've the
impression that Haya Shulman doesn't like:

 - forwarding to third party resolvers.
I agree so strongly that it feels like a straw man.  I think
forwarding to third party resolvers is an intolerable and 
unnecessary privacy and security hole.  Others disagree.

 - other mistakes that I think are even worse than forwarders.

 - DNSSEC
Perhaps that will be denied, but I challenge others to read those
papers with their litanies of DNSSEC issues and get an impression
of DNSSEC other than sow's ear sold as silk.  That was right
for DNSSEC in the past.  Maybe it will be right forever.  I hope
not, but only years will tell.  As far as I can tell from a quick
reading, the DNSSEC issues are valid, but are sometimes backward
looking, perhaps due to publication delays.  For example, default
verifying now in server software and verifying by resolvers such
as 8.8.8.8 should help the verifying situation.


  work on DNSSEC improvements and bug fixes before or after your
  issues? 

 Requiring such answers from me is absolutely out of place, I am
 probably not aware of the constraints that organisations face in their
 every day operation of the Internet, and so I never argued which
 countermeasures must be deployed and by whom. My goal is to identify
 vulnerabilities and investigate and recommend countermeasures that can
 prevent them. Each organisation should decide what solution suits its
 needs best, based on this and other information that is available to
 it.

That non-answer is absolutely out of place given Haya Shulman's
recommendations. It is unacceptable to presume enough awareness
of constraints etc. to tell people 'Do this, that, and the other' but
be unwilling to say whether those actions should be done before or
after closely related work.  This is especially true on this mailing
list, because for operators the recommendations are functionally
equivalent to 'do nothing but wait for new DNS software'.


 Port randomization is an extremely thin reed for security, because
  there are so few port number bits.

 There are techniques to artificially inflate the ports' distribution, and
 we already described one such technique in our ESORICS'12 paper.

Would that paper be 
http://link.springer.com/chapter/10.1007/978-3-642-33167-1_16
linked from https://sites.google.com/site/hayashulman/pub ? 
If so, where or how can I find a free version or a summary of the
notion?  Getting more than 16 bits of entropy from a 16 bit value
sounds interesting.  (I trust it's not that literal impossibility.)
I've heard of
  - jumbling domain case, 
 but that suffers from limitations in resolver cache code
 and it's not part of the UDP port number,
  - other fiddling with the payload, but they're not the port number,
  - the ID, but that's not the UDP port number,
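The first item, case jumbling, is usually called DNS 0x20 encoding. A sketch of the idea follows; the key derivation here is purely illustrative, not any implementation's actual scheme:

```python
import hashlib

def dns_0x20_encode(qname, secret, txid):
    """Derive one pseudorandom bit per character of the query name from a
    keyed hash and flip that letter's case accordingly (DNS 0x20).  A
    legitimate responder echoes the question byte-for-byte, so a forged
    reply must guess every case bit on top of port and transaction ID."""
    digest = hashlib.sha256(f"{secret}:{txid}:{qname.lower()}".encode()).digest()
    bits = int.from_bytes(digest, "big")
    out = []
    for i, ch in enumerate(qname):
        if ch.isalpha():
            out.append(ch.upper() if (bits >> i) & 1 else ch.lower())
        else:
            out.append(ch)  # dots and digits carry no case bit
    return "".join(out)

# Encoding never changes the name itself, only its case.
assert dns_0x20_encode("www.example.com", "k", 1234).lower() == "www.example.com"
```

As the list notes, the extra entropy lives in the payload, not the UDP port number, and only works for names with enough letters.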


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Keith Mitchell
On 10/22/2013 02:41 PM, Haya Shulman wrote:
 Yes, but as I explained privately previously, there is no record
 of this correspondence through official OARC channels - I did
 request you re-send, but I don't have a copy of it.
 
 I am not sure what you mean by `official OARC channels`, I forwarded 
 my communication on this issue, with porttest operators, to you a 
 month or so ago.

I've now tracked down the relevant correspondence, which you sent to a
couple of Verisign contacts with non-current OARC roles back in April
2012, then re-sent to me on 9th Sep. Sorry for saying you didn't send me
this, it's been a busy couple of months.

 Maybe these were not official channels, but I have not contacted
 OARC otherwise, via a different channel. Can you please advise how
 to contact OARC through official channels?

You already did this by communicating directly with me last month and
should continue to do so, thank you. I think we now have all the
disparate information we need to look into fixing the port tester, just
please understand that you are dealing with a community with many issues
to address and finite resources to do so.

Keith


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Jared Mauch

On Oct 22, 2013, at 7:42 AM, Daniel Kalchev dan...@digsys.bg wrote:

 I for one, do not believe DNSSEC is any difficult. I have turned DNSSEC 
 wherever I can. It has become easier and easier in the past few years to the 
 point I would call deploying DNSSEC today trivial. I have therefore changed 
 my stance with people considering DNSSEC deployment from careful, this stuff 
 needs special attention to good, encourage those guys.
 
 See, I can answer such questions. Why can't others?

It's difficult because there is not universal support amongst registrars.  Once 
again the wheel gets stuck when the technical side meets the business side.  
Before someone says switch registrar, it's usually not that easy and then 
becomes something resembling a full time project vs just throwing a switch.

Edit a zone file vs edit, run a script, upload some keys, roll some keys, do 
some other magic is harder than edit a zone file.

This runs into the same friction issue that using PGP and other tools 
encounter.  It seems simple enough to most folks, but when you add in someone 
less-technical, it goes off the rails quickly.  I can't count the number of 
times someone emailed me their full keyring or private key when they meant 
public.  It's not as easy as you think it is.

- Jared


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Michele Neylon - Blacknight

On 22 Oct 2013, at 20:28, Jared Mauch ja...@puck.nether.net
 wrote:
 
 
 It's difficult because there is not universal support amongst registrars.  
 Once again the wheel gets stuck when the technical side meets the business 
 side.  

It's not entirely business that causes the issues .. 

Registry operators do not have a consistent or uniform way of implementing 
DNSSEC, which makes integration more complex for registrars.

If, as a registrar, we only offered .com then it would be one thing, but that's 
not the case .. 



 Before someone says switch registrar, it's usually not that easy and then 
 becomes something resembling a full time project vs just throwing a switch.
 
 Edit a zone file vs edit, run a script, upload some keys, roll some keys, do 
 some other magic is harder than edit a zone file.
 
 This runs into the same friction issue that using PGP and other tools 
 encounter.  It seems simple enough to most folks, but when you add in someone 
 less-technical, it goes off the rails quickly.  I can't count the number of 
 times someone emailed me their full keyring or private key when they meant 
 public.  It's not as easy as you think it is.
 
 - Jared

Mr Michele Neylon
Blacknight Solutions ♞
Hosting & Domains
ICANN Accredited Registrar
http://www.blacknight.co
http://blog.blacknight.com/
Intl. +353 (0) 59  9183072
US: 213-233-1612 
Locall: 1850 929 929
Direct Dial: +353 (0)59 9183090
Facebook: http://fb.me/blacknight
Twitter: http://twitter.com/mneylon
---
Blacknight Internet Solutions Ltd, Unit 12A,Barrowside Business Park,Sleaty
Road,Graiguecullen,Carlow,Ireland  Company No.: 370845


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Paul Vixie
Haya Shulman wrote:


   so if i add first weaponized by Haya Shulman this would settle the
   matter?

  Thank you, can you please use Amir Herzberg and Haya Shulman (I
  collaborated on this attack together with my phd advisor Amir
  Herzberg).

 it shall be done.

 Thank you.

upon deeper consideration, weaponized is the wrong verb, unless you
have released your software. i can say first published if that will
serve your purpose.

 Eastlake cookies is a very neat proposal. In contrast to other
 challenge-response mechanisms, which reuse existing fields for
 security (while those fields were originally designed for a different
 purpose), e.g., source ports, Eastlake's proposal uses EDNS to add
 randomness in order to authenticate communication between resolver and
 name server. So, you are right, it does prevent many attacks, but, it
 does not prevent all the attacks, particularly those that exploit
 fragmentation. For instance:

 1. what about an IP packet that is fragmented into three fragments,
 such that the EDNS OPT RR is in the third fragment? By replacing the
 second fragment, the attacker can inject malicious content.

 2. another example also involves IP fragmentation, however in this
 scenario the second fragment can be of any size, e.g., a single byte.
 The attacker overwrites the transport layer port of the first
 fragment, e.g., to its own port and intercepts the packet (along with
 the cookie); replaces the DNS records and forwards the resulting
 response to the resolver.

 Both tricky but feasible. 
 Correct me if I am wrong, but I think that the cookies would not
 prevent these (above) attacks.
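Scenario 1 above depends only on how IP payloads are split on 8-byte boundaries. A small sketch of that arithmetic, with MTU and message size chosen purely for illustration:

```python
def fragment_offsets(payload_len, mtu=1500, ip_hdr=20):
    """Split an IP payload of payload_len bytes into (offset, length)
    fragments as a router with the given MTU would: every fragment except
    the last carries a payload that is a multiple of 8 bytes."""
    per_frag = (mtu - ip_hdr) // 8 * 8  # largest 8-byte-aligned chunk
    frags, off = [], 0
    while off < payload_len:
        n = min(per_frag, payload_len - off)
        frags.append((off, n))
        off += n
    return frags

# A ~3.3 KB DNS response over a 1500-byte MTU yields three fragments.
# If the EDNS OPT RR sits near the end of the message, it lands in the
# third fragment, so the middle fragment carries neither the UDP header
# nor the cookie option.
print(fragment_offsets(3300))
```

This only illustrates where the boundaries fall; whether a middle fragment can actually be substituted undetected is the point under dispute in the exchange above.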

i can't tell whether you're wrong, there's not enough detail here. if
you're able to replace the middle fragment, or perhaps replace all
fragments except the last one, then only SIG(0) or TSIG or DNSSEC could
stop you. however, my back of envelope estimate is that replacing the
middle fragment with one the same size but different content is more
than just tricky, and replacing all-but-the-last fragment would
require many hours at 100MBit/sec, which to me places it out of
consideration as an attack worth defending against.

i believe that mark andrews of ISC is going to re-release eastlake
cookies. i expect that in consideration of your fragmentation work, he
will add a 32-bit CRC covering the full message to the EDNS option that
contains the cookie. since the cookies method is something we need
anyway (for DDoS prevention), we ought to depend on it to solve for
fragmentation as well.




  I absolutely agree with you, deploying DNSSEC on the end hosts would
  be ideal for security.

 wait, wait, that's not what i said. i said recursive dns should be
 on-premise or on-host, not wide-area. i said nothing about end to end
 dnssec. what are you specifically agreeing with?


 Yes you are right, I agree with you that it is best to validate on a
 recursive resolver on-premise.

that is, again, not what i said. i want recursion, iteration, and
caching to occur on-premise or if necessary on-host. i will want this
even in the absence of globally deployed dnssec. i will also want
on-premise or if necessary on-host validation where possible, but that's
not what i was saying in the text you quoted.

 But, I also wanted to add that I think it is best to validate on the
 end host; this would also prevent attacks by attackers that are on the
 same network as the client, e.g., wireless networks or networks of ISPs.

that's a controversial topic, as i expect followups to this thread to
demonstrate. i won't address it here.


  I think I may have been misinterpreted. I believe cryptography is
  important and efforts should be invested in deployment of DNSSEC. One
  of the goals of our work on DNS was to motivate adoption of DNSSEC.

 that's great to hear.

a side by side reading of your earlier draft
(http://arxiv.org/pdf/1205.4011.pdf) and your current draft:

https://0a94266f-a-62cb3a1a-s-sites.googlegroups.com/site/hayashulman/files/fragmentation-poisoning.pdf?attachauth=ANoY7cpB1yJsBXMWL0_spxDjUMV9m5G_TjI98UgJE6OtoP98H-WrlRJ2AyJVhajdZ5za2vjZ14twuMHuB7NUcRW_EYv36scybuofLgPOwoU2Rvs7zpSnm_Qj3jA3noSc3ibX9b9_7tncZJdGca0FLY8SOrzMTY_O5bd0NPcwBXtDx9vtCjbRisMFf48MiOYFNO-66BY3iyGa584pJ0Sy2vYfI5ZKKCmvJhJsmY96N4XChK5cGgky8eg%3Dattredirects=0


...shows a remarkably different attitude toward dnssec. what led to your
reconsideration?

vixie

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Paul Vixie
Jared Mauch wrote:
 ...

 Edit a zone file vs edit, run a script, upload some keys, roll some keys, do 
 some other magic is harder than edit a zone file.

BIND9 V9.9 may surprise you. it has inline signing and automatic key
management. the code name for this feature set was DNSSEC For Humans
and was largely driven by joao damas. the only other magic that BIND9
can't help you with is telling your registrar about new KSK DS's, since
there's no standard API for a primary name server to use for
communication with the delegation server. in all other ways, BIND9 makes
DNSSEC as easy as edit a zone file. try it and report back, don't take
my word for it.
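For concreteness, a minimal zone stanza using that feature set might look like the following; the zone name, file name, and key directory are placeholders, and the BIND 9.9 ARM remains the authority on exact option names:

```
zone "example.com" {
    type master;
    file "example.com.db";
    key-directory "/etc/bind/keys";
    // maintain a signed copy of the zone alongside the unsigned master file
    inline-signing yes;
    // load keys found in key-directory and re-sign automatically as they roll
    auto-dnssec maintain;
};
```

The keys themselves are generated once with dnssec-keygen (the KSK with -f KSK) and dropped into the key directory; publishing the resulting DS record at the registrar remains the one manual step the message mentions.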

note, i'm not with ISC any more, but i see no reason to stop singing
their praises.

vixie


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Rubens Kuhl

On 22/10/2013, at 18:06, Michele Neylon - Blacknight wrote:

 
 On 22 Oct 2013, at 20:28, Jared Mauch ja...@puck.nether.net
 wrote:
 
 
 It's difficult because there is not universal support amongst registrars.  
 Once again the wheel gets stuck when the technical side meets the business 
 side.  
 
 It's not entirely business that causes the issues .. 

.nl and .cz achieved massive registrar adoption of DNSSEC by offering 
business incentives, so it seems the business side accounts for most of it. 

 
 Registry operators do not have a consistent or uniform way of implementing 
 DNSSEC, which makes integration more complex for registrars.

Do you mean secDNS 1.0 (RFC 4310) vs. secDNS 1.1 (RFC 5910)?  DS or DNSKEY? 
Both?  My guess is that secDNS 1.1 with DS and DNSKEY would work for all 
DNSSEC-signed EPP TLDs... 

 
 If, as a registrar, we only offered .com then it would be one thing, but 
 that's not the case .. 

Considering RFC 5910 is mandatory for all new gTLDs, and with that requirement 
being extended to gTLD renewals (.info, .biz, .org), it seems implementing RFC 
5910 cuts it. Even ccTLDs like .br (and others for sure) follow RFC 5910. 


Rubens









Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Jim Reid
On 22 Oct 2013, at 22:53, Rubens Kuhl rube...@nic.br wrote:

 .nl and .cz achieved massive registrar adoption of DNSSEC by offering
 business incentives, so it seems the business side accounts for most of it. 

So where are the incentives for resolver operators? If they switch on DNSSEC 
validation and get extra calls to customer support as a result, who pays? How 
many calls does customer support get before this wipes out an ISP's profit 
margin? This is another hurdle that has to be overcome somehow if DNSSEC is to 
be adopted.

It's all well and good that registries offer bribes^Wincentives to their sales 
channel, but the demand side (ie validation) needs incentives too and their 
needs are very different from someone who sells domain names and DNSSEC signing 
services.


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Vernon Schryver
I'm puzzled by the explanation of Socket Overloading in 
https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf
I understand it to say that Linux on a 3 GHz CPU receiving 25,000
packets/second (500 bytes @ 100 Mbit/sec) spends so much time in
interrupt code that low level packet buffers overflow.

That puzzles me for reasons that might be summarized by considering
my claim of 20 years ago that ttcp ran at wirespeed over FDDI with
only 40-60% of a 100 MHz CPU.
https://groups.google.com/forum/#!topic/comp.sys.sgi.hardware/S0ZFRpGMPWA
https://www.google.com/search?q=ttcp

Those tests used a 4 KByte MTU and so about 3K pps instead of 25K pps.
The FDDI firmware and driver avoided all interrupts when running at
speed, but I think even cheap modern PCI Ethernet cards have interrupt
bursting. Reasonable network hardware interrupts the host only when
the input queue goes from empty to not empty or the output queue goes
below perhaps half full, and then only interrupts after a delay
equal to perhaps half a minimum sized packet on the medium.  I wouldn't
expect cheap PCI cards to be that reasonable, or have hacks such as
ring buffer with prime number lengths to avoid other interrupts.
Still, ...

IRIX did what I called page flipping and what most call zero copy I/O
for user/kernel-space copying, but modern CPUs are or can be screaming
monsters while copying bytes which should reduce that advantage.  It
would be irrelevant for packets dropped in the driver, but not if the
bottleneck is in user space such as overloaded DNS server.

That old ttcp number was for TCP instead of UDP, which would be an
advantage for modern Linux.

So I would have guessed, without having looked at Linux network
code for many years, that even Linux should be using less than 20%
of a 3 GHz CPU doing not only interrupts but all of UDP/IP.
  100MHz/3GHz * 60% * 25000 pps /3000 pps = 17%
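The scaling arithmetic on the line above can be checked directly; this is a pure restatement of the numbers already given:

```python
# Scale the old FDDI measurement (60% of a 100 MHz CPU at ~3000 pps)
# to a 3 GHz CPU handling 25000 pps.
old_cpu_fraction = 0.60
scaled = (100e6 / 3e9) * old_cpu_fraction * (25000 / 3000)
print(round(scaled * 100))  # about 17 (percent of one 3 GHz core)
```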

Could the packet losses have been due to the system trying to send
lots of ICMP Port-Unreachables?  I have the confused impression that
Socket Overloading can involve flooding unrelated ports.

How was it confirmed that kernel interrupt handling was the cause
of the packet losses instead of the application (DNS server) getting
swamped and forcing the kernel to drop packets instead of putting
them into the application socket buffer?  Were giant application
socket buffers tried, perhaps with the Linux SO_RCVBUFFORCE?
(probably a 30 second change for BIND)
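The giant-socket-buffer experiment suggested above might be sketched with the standard socket API as follows; SO_RCVBUFFORCE is Linux-only and needs CAP_NET_ADMIN, so this falls back to plain SO_RCVBUF when unprivileged:

```python
import socket

def enlarge_rcvbuf(sock, nbytes):
    """Enlarge a socket's receive buffer so a burst of datagrams is queued
    rather than dropped.  SO_RCVBUF is clamped by net.core.rmem_max; the
    privileged SO_RCVBUFFORCE bypasses that ceiling on Linux."""
    try:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUFFORCE, nbytes)
    except (AttributeError, PermissionError, OSError):
        # unprivileged fallback, honoured only up to rmem_max
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, nbytes)
    # the kernel reports the effective (possibly doubled or clamped) size
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
effective = enlarge_rcvbuf(s, 8 * 1024 * 1024)  # ask for 8 MiB
s.close()
```

If losses vanish with a buffer this size, the bottleneck was the swamped application, not kernel interrupt handling.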

25K qps is not a big queryperf number, which is another reason why I
don't understand how only 25K UDP qps could swamp a Linux kernel.  Just
now the loopback smoke-test for RPZ for BIND 9.9.4 with the rpz2 patch
reported 24940 qps without RPZ on a 2-core 2.4 GHz CPU running FreeBSD 9.0.

What about the claims of Gbit/sec transfer speeds with Linux?
https://www.google.com/search?q=linux+gigabit+ethernet+speed

I'm not questioning the reported measurements; they are what they are.
However, if they were due to application overload instead of interrupt
processing, then there might be defenses such as giant socket buffers.


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Jo Rhett
I am not at liberty to disclose location or vendor, but I'm aware of linux 
boxes handling 20k PPS mixed UDP/TCP at an average 2% CPU. They aren't even 
modern boxes although a bit newer than the dual core that Vernon mentions 
below. In short, I agree completely with everything Vernon said here. I suspect 
outdated information or some other factor was involved.

On Oct 22, 2013, at 4:03 PM, Vernon Schryver v...@rhyolite.com wrote:
 I'm puzzled by the explanation of Socket Overloading in 
 https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf
 I understand it to say that Linux on a 3 GHz CPU receiving 25,000
 packets/second (500 bytes @ 100 Mbit/sec) spends so much time in
 interrupt code that low level packet buffers overflow.
 
 That puzzles me for reasons that might be summarized by considering
 my claim of 20 years ago that ttcp ran at wirespeed over FDDI with
 only 40-60% of a 100 MHz CPU.
 https://groups.google.com/forum/#!topic/comp.sys.sgi.hardware/S0ZFRpGMPWA
 https://www.google.com/search?q=ttcp
 
 Those tests used a 4 KByte MTU and so about 3K pps instead of 25K pps.
 The FDDI firmware and driver avoided all interrupts when running at
 speed, but I think even cheap modern PCI Ethernet cards have interrupt
 bursting. Reasonable network hardware interrupts the host only when
 the input queue goes from empty to not empty or the output queue goes
 below perhaps half full, and then only interrupts after a delay
 equal to perhaps half a minimum sized packet on the medium.  I wouldn't
 expect cheap PCI cards to be that reasonable, or have hacks such as
 ring buffer with prime number lengths to avoid other interrupts.
 Still, ...
 
 IRIX did what I called page flipping and what most call zero copy I/O
 for user/kernel-space copying, but modern CPUs are or can be screaming
 monsters while copying bytes which should reduce that advantage.  It
 would be irrelevant for packets dropped in the driver, but not if the
 bottleneck is in user space such as overloaded DNS server.
 
 That old ttcp number was for TCP instead of UDP, which would be an
 advantage for modern Linux.
 
 So I would have guessed, without having looked at Linux network
 code for many years, that even Linux should be using less than 20%
 of a 3 GHz CPU doing not only interrupts but all of UDP/IP.
  100MHz/3GHz * 60% * 25000 pps /3000 pps = 17%
 
 Could the packet losses have been due to the system trying to send
 lots of ICMP Port-Unreachables?  I have the confused impression that
 Socket Overloading can involve flooding unrelated ports.
 
 How was it confirmed that kernel interrupt handling was the cause
 of the packet losses instead of the application (DNS server) getting
 swamped and forcing the kernel to drop packets instead of putting
 them into the application socket buffer?  Were giant application
 socket buffers tried, perhaps with the Linux SO_RCVBUFFORCE?
 (probably a 30 second change for BIND)
 
 25K qps is not a big queryperf number, which is another reason why I
 don't understand how only 25K UDP qps could swamp a Linux kernel.  Just
 now the loopback smoke-test for RPZ for BIND 9.9.4 with the rpz2 patch
 reported 24940 qps without RPZ on a 2-core 2.4 GHz CPU running FreeBSD 9.0.
 
 What about the claims of Gbit/sec transfer speeds with Linux?
 https://www.google.com/search?q=linux+gigabit+ethernet+speed
 
 I'm not questioning the reported measurements; they are what they are.
 However, if they were due to application overload instead of interrupt
 processing, then there might be defenses such as giant socket buffers.
 
 
 Vernon Schryver v...@rhyolite.com

-- 
Jo Rhett
Net Consonance : net philanthropy to improve open source and internet projects.

Author of Instant Puppet 3 Starter: 
http://www.netconsonance.com/instant-puppet-3-starter-book/





Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Rubens Kuhl

On 22/10/2013, at 20:40, Jim Reid wrote:

 On 22 Oct 2013, at 22:53, Rubens Kuhl rube...@nic.br wrote:
 
 .nl and .cz achieved massive registrar adoption of DNSSEC by offering
 business incentives, so it seems the business side accounts for most of it. 
 
 So where are the incentives for resolver operators? If they switch on DNSSEC 
 validation and get extra calls to customer support as a result, who pays? How 
 many calls does customer support get before this wipes out an ISP's profit 
 margin? This is another hurdle that has to be overcome somehow if DNSSEC is 
 to be adopted.
 
 It's all well and good that registries offer bribes^Wincentives to their 
 sales channel, but the demand side (ie validation) needs incentives too and 
 their needs are very different from someone who sells domain names and DNSSEC 
 signing services.

What I observed at a local level was that connectivity providers once hit 
by DNS attacks, whether or not those attacks could have been mitigated by 
DNSSEC, rushed into deploying DNSSEC. So besides profit margins, potential 
liability costs (like I was trying to use my Internet Banking and was 
defrauded) are also economic incentives to deploy DNSSEC-validating resolvers. 

Talking to connectivity providers indicated they would see more value in DNSSEC 
if both more domains and the most used domains were DNSSEC-signed. We addressed 
the first part and are coming close to half a million DNSSEC domains in .br 
(without offering bri^H^H^Hincentives to sales channels), but most Top-N sites 
are still not signed with DNSSEC, so they still have an excuse. That 
contradicts a cost-based view of the issue, as having more DNSSEC-signed 
popular domains will only lead to more support calls with resolution issues, so 
either they won't do it either way, or they are indeed acting on a value-based 
view of the issue. 


Rubens








Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-22 Thread Matt Rowley
Vernon Schryver wrote:
 I'm puzzled by the explanation of Socket Overloading in
 https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf

  I understand it to say that Linux on a 3 GHz CPU receiving 25,000
 packets/second (500 bytes @ 100 Mbit/sec) spends so much time in
 interrupt code that low level packet buffers overflow.

 That puzzles me for reasons that might be summarized by considering
 my claim of 20 years ago that ttcp ran at wirespeed over FDDI with
 only 40-60% of a 100 MHz CPU.

snip/

Just to reinforce Vernon and Jo's points, we have DNS servers running
Linux at ARIN pushing 25~30k packets per second.  Overall CPU
utilization (across all cores) is under 10%.  Interrupt rates tend to be
around 15~18k per second.

cheers,
Matt




Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Haya Shulman

 your text above shows a drastic misunderstanding of both dns and dnssec.
 a correctly functioning recursive name server will not promote
 additional or authority data to answers. poison glue in a cache can
 cause poison referrals, or poisoned iterations, but not poisoned answers
 given to applications who called gethostbyname(). dnssec was crafted
 with this principle firmly in mind, and the lack of signatures over glue
 is quite deliberate -- not an oversight -- not a weakness.


Poisoning resolvers' caches is one issue, and what the resolvers return to
applications is a different matter.
IMHO `cache poisoning` is accepting and caching spoofed records. Cache
poisoning is a building block that can be applied for more complex attacks,
e.g., to redirect applications to malicious servers or for DoS attacks.

As I wrote in an earlier email, poisoning glue records can result in denial
of service attacks on resolvers, since they _cache_ those spoofed records
(although they do not return them to applications). You have not addressed
this concern in your response, the issue you discuss is different and it is
what applications receive from resolvers. The vulnerability that I pointed
out, is not related to returning the spoofed glue records to applications.
The question whether DoS (as a result of cache poisoning) is a weakness is
a different issue, I simply wrote that we identified this new
vulnerability, that even validating resolvers can cache spoofed glue from
attacker, and then remain stuck with those records (which may result in
degradation/denial of service).

thanks for clarifying that. i cannot credit your work in the section of
 my article where i wrote about fragmentation, because you were not the
 discoverer. in 2008, during the 'summer of fear' inspired by dan
 kaminsky's bug, i was a personal information hub among a lot of the dns
 industry. at one point florian weimer of BFK called me with a concern
 about fragmentation related attacks. i brought kaminsky into the
 conversation and the three of us brainstormed on and off for a week or
 so as to how to use fragments in a way that could cause dnssec data to
 be accepted as cryptoauthentic. we eventually gave up, alas, without
 publishing our original concerns, our various theories as to how it
 might be done, and why each such theory was unworkable.
 i was happy to cite your work in my references section because your
 explanation is clear and cogent, but since you were not the discoverer,
 i won't be crediting you as such.


Florian wrote in his response to me on this mailing list that he believed
that the attack was not feasible since he did not succeed at deploying it.
He identified that there was a vulnerability but did not provide a way to
exploit it.
For instance, Bernstein identified the vulnerability of predictable ports
long before Kaminsky's attacks, yet you still call that attack the
`Kaminsky attack`.
The point is: claiming that P ≠ NP won't earn you credit for the result if
someone else supplies the proof.
Unless I am misunderstanding, there was no published vulnerability prior to
our result. Please clarify if I am wrong.


your answer is evasive and nonresponsive, and i beg you to please try
 again, remembering that the vulnerabilities you are reporting and the
 workarounds you're recommending will be judged according to engineering
 economics. if we assume that dnssec is practical on a wide enough scale
 that it could prevent the vulnerabilities you are reporting on, then
 your work is of mainly academic interest. as vernon said earlier today,
 none of the vulnerabilities you are reporting on have been seen in use.
 i also agree with vernon's assessment that none of them will ever be
 seen in use.


Even if they are of academic interest only, I still hope that the Internet
community can learn from them, and have an option to protect themselves.
Regarding applicability: initially there were claims that this attack was
not feasible in a lab setting (btw, none with a clear explanation of why it
is not feasible). I am glad that this has changed now that other groups
have also validated the attacks.
Once a vulnerability is found, it will eventually be exploited, and the
vulnerabilities that we found allow stealth attacks - so I think the claim
that they are not launched in the wild is not based on solid proof. BTW,
I would appreciate it if you could clarify why you think they are not
applicable and cannot be launched in the wild.

As part of the research measurements that I do currently, I am running
evaluations (against my own domain of course), and there are vulnerable
networks (there are also networks that are not vulnerable to fragmentation
attacks - e.g., since they block fragmented responses).

that's too many e.g.'s and external references for me to follow. each
 fragmentary concept you've listed above strikes me as a non sequitur for
 source port randomization. can you dumb it down for me, please?


I think you asked me to provide few examples of 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Haya Shulman

 My problem with your findings is that you are grossly overstating
 their significance. None of them will ever be seen in the wild. As
 http://link.springer.com/chapter/10.1007/978-3-642-33167-1_16 and
 as I've said, showing the inevitable weakness of port randomization
 is good.


We found and described the vulnerability and showed how to apply it against
standard and patched resolvers. Can you please clarify in what way our
results `grossly overstate` significance?
Your second argument is not precise; we, and recently others, showed these
attacks to be practical. Could you please explain why you are certain that
the attacks do not pose a practical risk?

I'm sorry, but I think the mention of DNSSEC in your paper exists only
 because others forced it. I'm forced to that belief by various things
 including your refusal to admit the obvious about relative priorities and
 by statements like that sentence above that suggests that fixing port
 randomization could be easier than deploying DNSSEC in any except quite
 exceptional cases.


This conspiracy theory is intriguing...



On Sat, Oct 19, 2013 at 7:14 PM, Haya Shulman haya.shul...@gmail.com wrote:

 IMHO, DNSSEC is simply the natural defense against the attacks, which is why 
 I did not explicitly mention it, but I definitely had it in mind :-)

 Regarding the proxy-behind-upstream: to prevent the attacks DNSSEC has to be 
 deployed (and validated) on the proxy. Currently it seems that there are 
 proxies that signal support of DNSSEC (via the DO bit), but do not validate 
 responses, and validation is typically performed by the upstream forwarder.

 ---

 The complete absence of any mention of DNSSEC among those recommendations
 (or elsewhere) reads like an implicit claim that DNSSEC would not
 help.  Even if that claim was not intended, would it be accurate?

 Would DNSSEC make any of the recommendations less necessary or perhaps
 even moot?  If DNSSEC by itself would be effective against cache
 poisoning, then why isn't it among the recommendations, especially for
 Resolver-behind-Upstream?  Why aren't efforts to protect port
 randomization, hide hidden servers and so forth like trying to make
 it safe to use .rhosts and /etc/hosts.equiv files by filtering ICMP
 redirects and IP source routing, and strengthening TCP initial sequence
 numbers?



 On Sat, Oct 19, 2013 at 6:53 PM, Haya Shulman haya.shul...@gmail.com wrote:

 This is correct, the conclusion from our results (and mentioned in all
 our papers on DNS security) is to deploy DNSSEC (fully and correctly). We
 are proponents of cryptographic defenses, and I think that DNSSEC is the
 most suitable (proposed and standardised) mechanism to protect DNS against
 cache poisoning. Deployment of new Internet mechanisms is always
 challenging (and the same applies to DNSSEC). Therefore, we recommend short
 term countermeasures (against vulnerabilities that we found) and also
 investigate mechanisms to facilitate deployment of DNSSEC.


 On Sat, Oct 19, 2013 at 6:05 PM, Phil Regnauld regna...@nsrc.org wrote:

 P Vixie (paul) writes:
  M. Shulman, your summary does not list dnssec as a solution to any of
 these vulnerabilities, can you explain why not? Vixie

 I was wondering about that, and went to look at the abstracts:

 http://link.springer.com/chapter/10.1007/978-3-642-33167-1_16

 Security of Patched DNS

 [...]

 We present countermeasures preventing our attacks; however, we believe
 that our attacks provide additional motivation for adoption of DNSSEC
 (or other MitM-secure defenses).

 So at least this seems to be mentioned in the papers themselves (I
 didn't pay to find out).

 But I agree that the summary would benefit from stating this, as
 it's currently the only way to avoid poisoning. Not stating it could
 lead some to believe that these attacks are immune to DNSSEC
 protection of the cache.

 Cheers,
 Phil




 --
 Haya Shulman
 Technische Universität Darmstadt
 FB Informatik/EC SPRIDE
 Morewegstr. 30
 64293 Darmstadt
 Tel. +49 6151 16-75540
 www.ec-spride.de








___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

  My problem with your findings is that you are grossly overstating
  their significance. None of them will ever be seen in the wild. As
  http://link.springer.com/chapter/10.1007/978-3-642-33167-1_16 and
  as I've said, showing the inevitable weakness of port randomization
  is good.

 We found and described the vulnerability and showed how to apply it against
 standard and patched resolvers. Can you please clarify in what way our
 results `grossly overstate` significance?

Your claims that your issues are a significant security problem are
grossly exaggerated, because they will never be seen in the wild.

 Your second argument is not precise, we, and recently others, showed these
 attacks to be practical. Could you please explain why you are certain that
 the attacks do not pose a practical risk?

The existence of a vulnerability does not imply that it will be used.
Bad guys only use profitable attacks, whether the profit is mischief
or money.  Exploiting your vulnerabilities is too hard (less profitable)
compared to other vulnerabilities.

 I'm sorry, but I think the mention of DNSSEC in your paper exists only
  because others forced it. I'm forced to that belief by various things
  including your refusal to admit the obvious about relative priorities and
  by statements like that sentence above that suggests that fixing port
  randomization could be easier than deploying DNSSEC in any except quite
  exceptional cases.

 This conspiracy theory is intriguing...

It would be conspiracy thinking if I claimed you worked with others
to (not) do something.  I'm only extrapolating from your consistent
evasions of my question about the relative importance of port
randomization and DNSSEC.

I notice that besides not answering the priority question, you also
did not say where we can read your paper to see whether you mention
DNSSEC only as a coerced afterthought.

However, I've been asking and you've been evading the wrong question.
Whether DNSSEC work should be done before or after randomizing ports
is moot, because I think everyone who might randomize DNS ports while
it matters has already done so.  The major DNS implementors have also
done most of what they can for DNSSEC.  That various junk CPE forwarders,
proxies, and resolvers don't randomize won't be changed while it
matters.  Those vendors and their ISP customers aren't listening and
care about neither DNSSEC nor port randomization.  The only people who
might act on your issues are DNS users who cannot do anything about
port randomization but could deploy DNSSEC.

And that brings me to something bad.  You are giving people an
excuse to continue not deploying DNSSEC.  By not admitting that
DNSSEC is more important than randomizing ports, you are encouraging
people to continue waiting for others to fix the problem.  They are
often the same people who are waiting for everyone else to comply
with BCP 38.  Note that BCP 38 violations would probably figure in
any real exploit of your issues.


...

} From: Haya Shulman haya.shul...@gmail.com

} This year there were a number of injection attacks against TCP exploiting
} port randomisation algorithms recommended in [RFC6056]. Once the port is
} known, the TCP sequence number does not pose a significant challenge
} (although it is 32 bits, it is incremented sequentially within the
} connection and there are techniques to predict it). Port randomisation
} would prevent injections into TCP.

} For instance, name server pinning, identifying victim instances on cloud,
} derandomisation of communication over TOR. There are limitations to these
} attacks, but IMHO even if there are only few networks to which these
} attacks apply - these are still attacks. Port randomisation would prevent
} these attacks of course since the attacker would not know which .
} Port randomisation was also proposed as a countermeasure against DoS
} attacks (e.g., see here Denial of service protection with beaver).
}
} Please clarify why you think that port randomisation cannot prevent the
} attacks described above.

That is more wrong than right.  Port randomization is worth doing, but
it is a minor issue among TCP application security issues.  Port
randomization helps little because the range of ephemeral ports is
tiny and so cannot contain much entropy.  Attacks based on predicting
TCP sequence numbers require either being able to see TCP sequence
numbers, in which case you can also see port numbers, or brute force.
If you're flooding the target hoping to hit a TCP window, you need only
increase your flood to hit a random port.
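Vernon's entropy point can be made concrete with a few lines of arithmetic. The port range below is an illustrative assumption (roughly the Linux default), not a value taken from this thread:

```python
import math

def entropy_bits(n_values: int) -> float:
    """Bits of entropy in a uniformly random choice among n_values."""
    return math.log2(n_values)

# An illustrative ephemeral port range (roughly the Linux default,
# 32768-60999 -- an assumption, not a value from this thread).
ephemeral_ports = 60999 - 32768 + 1
port_bits = entropy_bits(ephemeral_ports)   # ~14.8 bits

# A fully random 32-bit TCP initial sequence number, for comparison.
isn_bits = entropy_bits(2 ** 32)            # 32 bits

# Randomizing the port multiplies a blind attacker's search space by
# the number of ports -- useful, but small next to the ISN space.
print(f"port entropy: {port_bits:.1f} bits, ISN entropy: {isn_bits:.0f} bits")
```

The same arithmetic explains the flooding remark above: a blind flooder merely multiplies its packet count by the size of the port range.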

} Bernstein identified predictable ports to be vulnerable long ago; it is
} surprising to me that, after so many years, the community is still not
} convinced that port randomisation is significant.

Outside the Steve Gibson School of Internet Security, significance
is relative.  There will always be an infinite number of vulnerabilities.
Competent defenses don't 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Colm MacCárthaigh
On Mon, Oct 21, 2013 at 12:17 AM, Paul Vixie p...@redbarn.org wrote:
 i apologize for my sloppy wording. i mean full deployment, in either case.
 your claims and your proposed workarounds will be evaluated through the lens
 of engineering economics. as vernon schryver has been (unsuccessfully thus
 far) trying to explain, effort expended to defend against vulnerabilities
 will have to be prioritized alongside of, and in competition with, effort
 expended to deploy dnssec.

Economics also include costs. The operational cost of deploying DNSSEC
validation on resolvers remains high - there are still frequent key
rotation and signing errors that cause various DNS subtrees to be
unresolvable. Very few users are willing to accept that that is better
for them, which makes it hard to tell the average resolver operator
that turning on validation is a good idea.

 a correctly functioning recursive name server will not promote additional or
 authority data to answers. poison glue in a cache can cause poison
 referrals, or poisoned iterations, but not poisoned answers given to
 applications who called gethostbyname(). dnssec was crafted with this
 principle firmly in mind, and the lack of signatures over glue is quite
 deliberate -- not an oversight -- not a weakness.

If an attacker can cause the domain to be unresolvable, that seems
like a weakness.

 thanks for clarifying that. i cannot credit your work in the section of my
 article where i wrote about fragmentation, because you were not the
 discoverer. in 2008, during the 'summer of fear' inspired by dan kaminsky's
 bug,

Kaminsky wasn't the discoverer of the Kaminsky's bug either, it was
long known, yet here you credit him. Not that I mean to deny credit to
Kaminsky, he did a good job of publicising the vulnerability. Just as
Haya has done here.

 your answer is evasive and nonresponsive, and i beg you to please try again,
 remembering that the vulnerabilities you are reporting and the workarounds
 you're recommending will be judged according to engineering economics. if we
 assume that dnssec is practical on a wide enough scale that it could prevent
 the vulnerabilities you are reporting on, then your work is of mainly
 academic interest. as vernon said earlier today, none of the vulnerabilities
 you are reporting on have been seen in use. i also agree with vernon's
 assessment that none of them will ever be seen in use.

Back before Kaminsky made the need for port randomisation undeniable
with an actual working PoC, this sounds like the ISC/BIND response to
port randomisation attacks. Other implementors and operators made a
better judgement and avoided the problem entirely, taking the cautious
path. 5 years later, are you really saying we should ignore another
attack vector?

The impact even with DNSSEC fully enabled seems concerning enough to
warrant attention.

-- 
Colm


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Edward Lewis
On Oct 21, 2013, at 11:54, Keith Mitchell wrote:
 
 The ISC/BIND response to Kaminsky in 2008 was to burn perhaps 50% of
 the company's product-wide development and support resources over that
 year to co-ordinating, fixing, disclosing, patching, releasing and
 evangelizing the solution to the problem. While at the time it felt to
 us like great public benefit work was being done for the community, even
 by the end of that year it was becoming clear it was not a particularly
 great business decision.


Over the weekend there was a CENTR Technical Workshop (the day before RIPE 67 
and in the same location) where a panel was held on the recent DNS 
vulnerabilities as reported at DNS-OARC 7 days earlier.  One of the thoughts 
that emerged (IMHO) was to set priorities like this: design away the 
theoretically/academically described vulnerabilities, reserving trench-warfare 
techniques to battle attacks we learn from packet captures.  Given limited 
resources, that is how I'd spend them.

So, yes, I believe this - in retrospect (referring to Keith's report) a lot of 
resources were burned to stem an attack that never really materialized.  
Possibly because of the fix, but we will never know.

Oddly, during the CENTR meeting and during the RIPE DNS WG meeting that 
followed, the quote "in the long run we are all dead" [0] was uttered 
independently (different speakers) to mean that it's fine to design into the 
future, but we need to eat now.  Under that banner, RRL serves an important 
purpose by staving off the apocalypse, even if (and I do mean if) its benefit 
is temporary.

This assumes that there is someone designing away vulnerabilities into the 
future, which I fear is a bad assumption currently.  Most delivered techniques 
are triage, with anything requiring major architectural rework considered to be 
too far off into the future to even begin.  I don't think DNSSEC would stand 
a chance starting from scratch today, given how avenues of innovation have 
changed.

[0] 
http://en.wikipedia.org/wiki/In_the_long_run_we_are_all_dead#Macroeconomic_usages

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis 
NeuStar          You can leave a voice message at +1-571-434-5468

There are no answers - just tradeoffs, decisions, and responses.


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Vernon Schryver
 From: =?ISO-8859-1?Q?Colm_MacC=E1rthaigh?= c...@stdlib.net


 Economics also include costs. The operational cost of deploying DNSSEC
 validation on resolvers remains high - there are still frequent key
 rotation and signing errors that cause various DNS subtrees to be
 unresolvable.

On what do you base your claims about the fatal costs of DNSSEC
validation?
I claim relevant knowledge and experience, not just from code I wrote
a few years ago to reduce the costs of DNSSEC on very large resolvers,
but from signing my own domains and enabling validation on all of the
resolvers that I control.  My domains and resolvers are insignificant,
but I hope I would have noticed any fatal costs.

Are you aware that Comcast's resolvers have been validating for some
time?  I think Google is also validating, based on "Webmaster, your
web page is not available to your spider" messages after a configuration
error in my signing machinery, but I am not sure.  Does that conflict
with your claims about the fatal costs of validating?

Yes, I've noticed that Google is still not signing.  Maybe the
continuing hijackings of their ccTLD domains will move them.


 If an attacker can cause the domain to be unresolvable, that seems
 like a weakness.

True, but the right question is not "Does DNSSEC add vulnerabilities?"
but "Overall, is DNS more or less secure with DNSSEC?" or "Among all
of the things I can do, what will improve the security of my users and
the Internet in general?"

Defenders who care about the security of their systems and the Internet
in general don't pick and choose among weaknesses based only on what
is easiest, what can be punted to others, or what contributes to their
reputations.  They don't do as Steve Gibson did and harp on the bogus
catastrophe of Windows XP raw sockets to enhance his reputation and
sell his services.


 Kaminsky wasn't the discoverer of the Kaminsky's bug either, it was
 long known, yet here you credit him. Not that I mean to deny credit to
 Kaminsky, he did a good job of publicising the vulnerability. Just as
 Haya has done here.

I suspect Kaminsky got the credit because he had been contributing to
the field for years.  But who cares who got there first?  Every request
I see for credit is recorded in my private accounting as a debit against
the credibility of the person demanding credit, because credit demands
suggest interests which suggest biases and so inaccuracy.

Yes, I've heard of Kaminsky's business interests, and so I don't
take his announcements at face value.  You should also discount my
credibility based on my pecuniary or other interests.  Where you
can't determine my interests, act on your best guess.


 Back before Kaminsky made the need for port-randominsation undeniable
 with an actual working PoC, this sounds like the ISC/Bind response to
 port randomisation attacks. Other implementors and operators made a
 better judgement avoided the problem entirely, taking the cautious
 path. 5 years later, are you really saying we should ignore another
 attack vector?

Who besides you and Haya Shulman has said anything about not randomizing
ports?  What port randomization improvements do you think are needed
in current releases of any major DNS implementation?  Where port
randomization problems exist such as in junk CPE that won't get fixed
before I retire, what contributes most to solutions: selling
$29.95/€24.95/£19.95 academic papers or turning on DNSSEC?

The issue for me is one of relative priorities.  Among all Internet
security issues that I might touch, which should get my attention
and effort?  By remaining silent about emphasising port randomization
over DNSSEC (or using distant instead of nearby validating resolvers)
would I help or harm?


 The impact even with DNSSEC fully enabled seems concerning enough to
 warrant attention.

Let's agree that ports ought to be as random as TCP ISNs, improve port
randomness where each of us can, and stop implying that anyone thinks
or says otherwise.  Let's also stop the "DNSSEC is a problem" stuff.

Finally let's consider how you are helping.  Is there anything you can
do to improve port randomization?  If you are a committer in any open
or proprietary source trees, will you make any needed port randomization
fixes?  Have you deployed DNSSEC?  What about BCP 38, since cache
poisoning is likely to depend on BCP 38 violations?


Vernon Schryverv...@rhyolite.com

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Colm MacCárthaigh
On Mon, Oct 21, 2013 at 11:32 AM, Vernon Schryver v...@rhyolite.com wrote:
 From: =?ISO-8859-1?Q?Colm_MacC=E1rthaigh?= c...@stdlib.net
 Economics also include costs. The operational cost of deploying DNSSEC
 validation on resolvers remains high - there are still frequent key
 rotation and signing errors that cause various DNS subtrees to be
 unresolvable.

 On what do you base your claims about the fatal costs of DNSSEC
 validation?

I wrote that the costs are high, not fatal. http://dns.comcast.net/
serves as a reasonable, though not complete, public example list of
issues.

 If an attacker can cause the domain to be unresolvable, that seems
 like a weakness.

 True, but the right question is not Does DNSSEC add vulnerabilities?
 but Overall, is DNS more or less secure with DNSSEC? or Among all
 of the things I can do, what will improve the security of my users and
 the Internet in general?

This thread concerns the vulnerabilities uncovered in the fragment
attacks. One of those vulnerabilities is that domains can be rendered
unresolvable, even when DNSSEC is enabled. That seems like something
to take seriously.

 Kaminsky wasn't the discoverer of the Kaminsky's bug either, it was
 long known, yet here you credit him. Not that I mean to deny credit to
 Kaminsky, he did a good job of publicising the vulnerability. Just as
 Haya has done here.

 I suspect Kaminsky got the credit because he had been contributing to
 the field for years.  But who cares who got there first?

Evidently Paul Vixie does. That's what I was responding to.

 Let's agree that ports ought to be as random as TCP ISNs, improve port
 randomness where each of us can, and stop implying that anyone thinks
 or says otherwise.

O.k., but what about fragmentation point randomisation, or randomized
DNS payload padding?

-- 
Colm


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Michele Neylon - Blacknight

On 21 Oct 2013, at 19:32, Vernon Schryver v...@rhyolite.com wrote:
 
 Yes, I've noticed that Google is still not signing.  Maybe the
 continuing hijackings of their ccTLD domains will move them.

I suspect they're more interested in getting registry lock in place rather 
than DNSSEC.

Most of the attacks against Google have involved changing the name servers 
completely .. 




Mr Michele Neylon
Blacknight Solutions ♞
Hosting & Domains
ICANN Accredited Registrar
http://www.blacknight.co
http://blog.blacknight.com/
Intl. +353 (0) 59  9183072
US: 213-233-1612 
Locall: 1850 929 929
Direct Dial: +353 (0)59 9183090
Facebook: http://fb.me/blacknight
Twitter: http://twitter.com/mneylon
---
Blacknight Internet Solutions Ltd, Unit 12A,Barrowside Business Park,Sleaty
Road,Graiguecullen,Carlow,Ireland  Company No.: 370845


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Warren Kumari

On Oct 21, 2013, at 4:39 PM, Phil Regnauld regna...@nsrc.org wrote:

 Michele Neylon - Blacknight (michele) writes:
 
 Yes, I've noticed that Google is still not signing.  Maybe the
 continuing hijackings of their ccTLD domains will move them.
 
 I suspect they're more interested in getting registry lock in place rather 
 than DNSSEC.
 
   That'd be assuming most registries have the concept of lock, which is
   far from being the case.

Some do, some don't… 
In some cases the registry lock is actually just a comment in a zone file, 
saying something along  the lines of:
;  WARNING -
; Don't change this!
; Call Warren at +1-xxx-xxx- before making any changes.
;  WARNING ---

In a number of cases registries don't officially support locks, but have been 
willing to do something unusual for a beer / friend.

 
 Most of the attacks against Google have involved changing the name servers 
 completely .. 
 
   Through social engineering and sometimes through directed attacks, yes.

Sadly yes. 

W

 
   Cheers,
   Phil
 

---
Tsort's Constant: 
1.67563, or precisely 1,237.98712567 times the difference between the distance 
to the sun and the weight of a small orange. 
-- Terry Pratchett, The Light Fantastic (slightly modified)



Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-21 Thread Vernon Schryver
 From: Warren Kumari war...@kumari.net

  I suspect they're more interested in getting registry lock in place 
  rather than DNSSEC.

  Most of the attacks against Google have involved changing the name servers 
  completely .. 
  
  Through social engineering and sometimes through directed attacks, yes.

 Sadly yes. 

I trust we all agree that cache attacks with non-random ports,
fragmentation, or padding are irrelevant except perhaps indirectly
through the general (lack of) value of DNSSEC that I claim better
prevents cache attacks than random ports.

Wouldn't DNSSEC have at least not made things worse, and possibly
made them better, by:
  - making the social engineering more difficult by forcing the bad
 guys to change the key as well as NS RRs
  - possibly making the bogus records fail to validate for a while
 at the start of the attack, thanks to what might look like an
 unplanned KSK change.
  - possibly making the bogus records fail to validate sooner and so
 get ignored sooner after the registrar records are restored, again
 thanks to what might look like an unplanned KSK change.


Vernon Schryverv...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-20 Thread Haya Shulman
Sorry for the delay, I was on my way to a different continent.
Please see response below.


On Sat, Oct 19, 2013 at 9:21 PM, Paul Vixie p...@redbarn.org wrote:

 Haya Shulman wrote:

 You are absolutely right, thanks for pointing this out.


 thanks for your kind words, but, we are still not communicating reliably
 here. see below.


  DNSSEC is the best solution to these (and other) vulnerabilities and
 efforts should be focused on its (correct) adoption (see challenges here:
 http://eprint.iacr.org/2013/254).
 However, since partial DNSSEC deployment may introduce new
 vulnerabilities, e.g., fragmentation-based attacks, the recommendations,
 that I wrote in an earlier email, can be adopted in the short term to
 prevent attacks till DNSSEC is fully deployed.


 by this, do you mean that you have found a fragmentation based attack that
 works against DNSSEC?

 One of the factors causing fragmentation is signed responses (from zones
that adopted DNSSEC). Signed responses can be abused for DNS cache
poisoning in the following scenarios: (1) when resolvers cannot establish a
chain-of-trust to the target zone (very common), or (2) when resolvers do
not perform `strict validation` of DNSSEC. As we point out in our work,
many resolvers currently support such mode (some implicitly, others
explicitly, e.g., Unbound), i.e., signal support of DNSSEC, however accept
and cache spoofed responses (or, e.g., responses with missing or expired
keys/signatures).
According to different studies, it is commonly accepted that only about 3%
of the resolvers perform validation. One of the reasons for support of
permissive DNSSEC validation is interoperability problems, i.e.,
clients/networks may be rendered without DNS functionality (i.e., no
Internet connectivity for applications) if resolvers insist on strict
DNSSEC validation, and e.g., discard responses that are not properly signed
(i.e., missing signatures).
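The permissive-resolver gap described above (signalling DNSSEC support without validating) comes down to two independent bits. A minimal stdlib-only sketch of the flag arithmetic; the bit constants are from the DNS and EDNS0 wire formats, while the forwarder scenario itself is hypothetical:

```python
# DNS header and EDNS0 flag bits (RFC 4035 / RFC 6891 wire format).
AD = 0x0020   # "Authentic Data" bit in the DNS header flags
DO = 0x8000   # "DNSSEC OK" bit in the EDNS0 flags

# A permissive forwarder sets DO upstream, which merely asks for
# RRSIGs to be included in responses...
upstream_edns_flags = DO

# ...but it never validates, so it has no basis for setting AD in
# answers to its clients. 0x8180 = QR|RD|RA, a typical answer.
client_header_flags = 0x8180

assert upstream_edns_flags & DO                # DNSSEC "supported"
assert not (client_header_flags & AD)          # but nothing validated
```

Signalling DO while caching whatever arrives is exactly the mode the attacks above exploit.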

Our attacks apply to such resolvers. Furthermore, many zones are
misconfigured, e.g., the parent zone may serve an NSEC (or NSEC3) in its
referral responses, while the child is signed (e.g., this was the case with
the MIL TLD).
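A rough sketch of why fragmented (signed) responses open a spoofing window, assuming the textbook RFC 791 reassembly rule; the addresses and IP ID below are made-up placeholders:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FragKey:
    """Fields IP reassembly uses to group fragments (RFC 791)."""
    src: str
    dst: str
    proto: int   # 17 = UDP
    ip_id: int   # 16-bit Identification field

# Nothing in the key is secret except the 16-bit IP ID, and many
# servers historically assigned IDs sequentially (predictably).
genuine = FragKey("192.0.2.1", "198.51.100.2", 17, 4242)
spoofed = FragKey("192.0.2.1", "198.51.100.2", 17, 4242)  # attacker's guess

# A second fragment matching the key is accepted into reassembly; the
# UDP checksum and DNS TXID usually live in the *first* fragment, so
# they offer no protection for later fragments.
assert spoofed == genuine

# Even with a fully random ID, one blind guess succeeds with
# probability 1/65536 -- and the attacker can send many guesses.
print(f"blind-guess success per packet: 1/{2 ** 16}")
```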

 by this, do you mean that if DNSSEC is widely deployed, your other
 recommendations are unnecessary?


Some of our recommendations are still relevant even if DNSSEC is widely
deployed. We showed attacks that apply to properly signed zones and
strictly validating resolvers. Since referral responses are not signed, the
attacker can inject spoofed records (e.g., A records in glue) which will be
accepted by the resolvers. Such cache poisoning can be used for denial (or
degradation) of service attacks - a strictly validating resolver will not
accept unsigned responses from the attacker and will be stuck with the
malicious cached name server records (unless the resolver goes back to the
parent zone again - however such behaviour is not a security measure and
should not be relied upon).
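The denial-of-service scenario in the paragraph above can be sketched as a toy cache model; all names and addresses are hypothetical, and real resolver logic is far more involved:

```python
# Toy model: referral glue is cached without signatures, while
# answers must pass DNSSEC validation.
cache = {}

def accept_referral(ns_name: str, glue_ip: str) -> None:
    """Glue in referrals carries no RRSIGs, so it is cached unchecked."""
    cache[ns_name] = glue_ip

def resolve(qname: str, answer_validates: bool):
    """A strictly validating resolver rejects answers that fail DNSSEC."""
    server = cache.get("ns.example.net")     # hypothetical NS name
    if server is None or not answer_validates:
        return None                          # resolution fails
    return (server, qname)

# Attacker injects spoofed glue via an unsigned referral...
accept_referral("ns.example.net", "203.0.113.9")

# ...queries now go to the attacker, whose answers cannot validate,
# so the domain is blackholed until the poisoned glue expires.
assert resolve("www.example.com", answer_validates=False) is None
```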

Furthermore, in the proxy-behind-upstream setting, even when DNSSEC is
supported by all zones and is validated by the upstream forwarder, but not
by the proxy, the proxy can be attacked. Ideally validation should be at
the end hosts - we are not there yet with DNSSEC.


 in your next message you wrote:

 Haya Shulman wrote:

 ..., the conclusion from our results (and mentioned in all our papers on
 DNS security) is to deploy DNSSEC (fully and correctly). We are proponents
 of cryptographic defenses, and I think that DNSSEC is the most suitable
 (proposed and standardised) mechanism to protect DNS against cache
 poisoning. Deployment of new Internet mechanisms is always challenging (and
 the same applies to DNSSEC). Therefore, we recommend short term
 countermeasures (against vulnerabilities that we found) and also
 investigate mechanisms to facilitate deployment of DNSSEC.


 in 2008, we undertook the short term (five years now) countermeasure of
 source port randomization, in order to give us time to deploy DNSSEC. if
 five years made no difference, and if more short term countermeasures are
 required, then will another five years be enough? perhaps ten years?
 exactly how long is a short term expected to be?

 for more information, see:


 http://www.circleid.com/posts/20130913_on_the_time_value_of_security_features_in_dns/


Thanks, you summarised this very nicely. I'd like to bring it to your
attention that, in contrast to other sections, you did not cite our work
explicitly, in a section where you describe our fragmentation based attacks
(please add it).
My response to your question is the following: DNSSEC is a new mechanism,
crucial for long term (future) security of the Internet. The concern that
you are raising applies to other new mechanisms as well, e.g., BGP security
and even IPv6, and is not specific to DNSSEC. Deploying new mechanisms in
the Internet has always been a challenge, and the mechanisms may go through
a number of adaptations during the incremental deployment phases, and
intermediate transition mechanisms 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-20 Thread Haya Shulman

 In that case, on what should an organization spend time or money

first, on DNSSEC or the recommendations in the mail message?  Would
it be better if each of the recommendations in the mail message
started with something like this?

Deploy DNSSEC, and consider the following to help protect cached

data not yet protected with DNSSEC.



It's a good point, thanks. I will rewrite the recommendations according to
what is essential, and indicate which type of attack and which network
configuration each applies to.


That sounds like a more significant bug than port obscurity or

randomization.  If it is a bug, which should be addressed first in
that software or those installations, this DNSSEC bug or the
recommendations in the mail message?  If it is a significant DNSSEC
bug, it would be good if a future version of the mail message

mentioned it.


It is not always a bug imho. Some resolvers, e.g., unbound, explicitly
allow such permissive modes of DNSSEC validation, others support this
implicitly and the rest may simply not be configured properly.
Permissive modes are typically used during the incremental deployment
phases prior to full adoption, e.g., to see that DNSSEC works ok, and does
not break anything.
Permissive mode introduces a security vulnerability - since a resolver
signals support of DNSSEC, it receives large (often fragmented) responses,
and thus may be vulnerable to our cache poisoning attack. On the other
hand, network operators, may be concerned (often justly) with enforcing
strict DNSSEC validation, due to interoperability (or other) problems (we
discuss this in more detail in `Availability and Security Challenges
Towards Adoption of DNSSEC`).





On Sat, Oct 19, 2013 at 7:14 PM, Haya Shulman haya.shul...@gmail.com wrote:

 IMHO, DNSSEC is simply the natural defense against the attacks, which is why 
 I did not explicitly mention it, but I definitely had it in mind :-)

 Regarding the proxy-behind-upstream: to prevent the attacks DNSSEC has to be
 deployed (and validated) on the proxy. Currently it seems that there are
 proxies that signal support of DNSSEC (via the DO bit), but do not validate
 responses, and validation is typically performed by the upstream forwarder.

 ---

 The complete absence of any mention of DNSSEC among those recommendations
 (or elsewhere) reads like an implicit claim that DNSSEC would not
 help.  Even if that claim was not intended, would it be accurate?

 Would DNSSEC make any of the recommendations less necessary or perhaps
 even moot?  If DNSSEC by itself would be effective against cache
 poisoning, then isn't it among the recommendations, especially for
 Resolver-behind-Upstream?  Why aren't efforts to protect port
 randomization, hide hidden servers and so forth like trying to make
 it safe to use .rhosts and /etc/hosts.equiv files by filtering ICMP
 redirects and IP source routing, and strengthening TCP initial sequence
 numbers?



 On Sat, Oct 19, 2013 at 6:53 PM, Haya Shulman haya.shul...@gmail.com wrote:

 This is correct, the conclusion from our results (and mentioned in all
 our papers on DNS security) is to deploy DNSSEC (fully and correctly). We
 are proponents of cryptographic defenses, and I think that DNSSEC is the
 most suitable (proposed and standardised) mechanism to protect DNS against
 cache poisoning. Deployment of new Internet mechanisms is always
 challenging (and the same applies to DNSSEC). Therefore, we recommend short
 term countermeasures (against vulnerabilities that we found) and also
 investigate mechanisms to facilitate deployment of DNSSEC.


 On Sat, Oct 19, 2013 at 6:05 PM, Phil Regnauld regna...@nsrc.org wrote:

 P Vixie (paul) writes:
  M. Shulman, your summary does not list dnssec as a solution to any of
 these vulnerabilities, can you explain why not? Vixie

 I was wondering about that, and went to look at the abstracts:

 http://link.springer.com/chapter/10.1007/978-3-642-33167-1_16

 Security of Patched DNS

 [...]

 We present countermeasures preventing our attacks; however, we believe
 that our attacks provide additional motivation for adoption of DNSSEC
 (or other MitM-secure defenses).

 So at least this seems to be mentioned in the papers themselves (I
 didn't pay to find out).

 But I agree that the summary would benefit from stating this, as it's
 currently the only way to avoid poisoning. Not stating it could lead
 some to believe that these attacks are immune to DNSSEC protection of
 the cache.

 Cheers,
 Phil




 --

 Haya Shulman

 Technische Universität Darmstadt

 FB Informatik/EC SPRIDE

 Morewegstr. 30

 64293 Darmstadt

 Tel. +49 6151 16-75540

 www.ec-spride.de








-- 

Haya Shulman

Technische Universität Darmstadt

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-20 Thread Haya Shulman

 I do not see an answer to my intended question. Again, given inevitably

limited real time, over-committed programmer and DNS administrator
hours, and limited money, should problems in DNSSEC implementations
and installations be addressed before or after your issues?



Some of our recommendations can be useful for security (also against
other attacks - see earlier email), and can assist with
interoperability problems, that may emerge during deployment of
DNSSEC.
For instance, our recommendations for IP layer, e.g., reducing IP
defragmentation cache size or randomising IP-ID, can be useful not
only against poisoning attacks (in particular, fragmentation has a
long history of exploits for denial/degradation of service attacks);
Port randomisation is also a useful feature, not only against cache
poisoning (see earlier email); and so on.

Should the people working on DNS implementations prioritize making
their DNSSEC code more robust and easier to use above or below
addressing your issues?


I do not think that there is a general answer to this question, as it
depends on the specific organisation/network.


Which should be built or fixed first, mechanisms such as auto-signing
that make DNSSEC easier to deploy and more robust (e.g. reducing
accidental signature expiration), or your cache pollution issues?

Should requests for proposals and requests for quotes rank DNSSEC
features including ease of DNSSEC use above or below fixes for your
cache pollution issues?

Should you spend most of your own time looking for improvements and
bugs in DNSSEC or looking for more ways to pollute DNS caches where
DNSSEC is not used?

Both are important: (1) disclosing attacks raises awareness to the
significance of systematic defenses, and motivates deployment thereof; it
also enables the networks to protect themselves in the meanwhile. I would
not be surprised if similar attacks were deployed in the Internet, without
anyone being aware of their existence.

(2) I also study deployment challenges and improvements (and not only
attacks).



I think that is neither a response to my claim, accurate, nor
relevant to what I understood of your claim about forwarders.

How can something that introduces a security vulnerability not always
be a bug in your or anyone's opinion?

---


Sure, permissive mode is an explicit feature. I believe a bug is
something that is not intentionally introduced (well, at least not
as a general rule).

---

If you meant instead to say that permissive verification is a less
important bug than other things, then how do you rank your cache
pollution issues against other bugs starting with not deploying DNSSEC?



---

I would hope that disclosure may help some organisations and networks
protect themselves. The other option is to be unaware of the
vulnerabilities (and their exploits).
Do you think vulnerabilities are better left for attackers to take care of?
 BTW, we withheld some of the works from publication, and tried to
coordinate disclosure.


Your work would be valuable if it helped pressure people to get
busy on DNSSEC.  However, instead of saying Use DNSSEC because
port randomization has these newly discovered weaknesses, you only
grudgingly and under pressure admit that DNSSEC even exists.

 Many networks cannot deploy DNSSEC overnight, due to different factors.
Port randomisation algorithms that were proposed have weaknesses, but
proper randomisation should solve these problems.

I was under pressure to catch a flight when I responded and forgot DNSSEC;
it is as dear to me as it is to you :-)


On Sun, Oct 20, 2013 at 10:42 PM, Haya Shulman haya.shul...@gmail.com wrote:

  In that case, on what should an organization spend time or money

 first, on DNSSEC or the recommendations in the mail message?  Would
 it be better if each of the recommendations in the mail message
 started with something like this?

 Deploy DNSSEC, and consider the following to help protect cached

 data not yet protected with DNSSEC.



 It's a good point, thanks. I will rewrite the recommendations according to
 what is essential, and indicate which type of attack and which network
 configuration each applies to.


 That sounds like a more significant bug than port obscurity or

 randomization.  If it is a bug, which should be addressed first in
 that software or those installations, this DNSSEC bug or the
 recommendations in the mail message?  If it is a significant DNSSEC
 bug, it would be good if a future version of the mail message

 mentioned it.


 It is not always a bug imho. Some resolvers, e.g., unbound, explicitly
 allow such permissive modes of DNSSEC validation, others support this
 implicitly and the rest may simply not be configured properly.
 Permissive modes are typically used during the incremental deployment
 phases prior to full adoption, e.g., to see that DNSSEC works ok, and does
 not break anything.
 Permissive mode introduces a security vulnerability - since a resolver
 signals support of 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-20 Thread David Conrad
On Oct 20, 2013, at 2:16 PM, Vernon Schryver v...@rhyolite.com wrote:
 Should the people working on DNS implementations prioritize making
 their DNSSEC code more robust and easier to use above or below
 addressing your issues?

I'd say below.

Resolver operators (hopefully) want to protect their caches.  DNSSEC will do 
that, but only if people are signing their zones. There are lots of external 
parties (e.g., registries, registrars, software developers, resolver operators, 
etc.) that need to act to get DNSSEC deployed, and there remains very little incentive for anyone 
to sign their zones, regardless of how robust and easy it might be made.

The alternative would be to disregard current and future cache poisoning 
attacks.  Pragmatically speaking, I personally think it highly questionable to 
ignore cache poisoning vulnerabilities because something which isn't yet 
deployed to 10% of the Internet will fix it.

This would be a bit like saying don't deploy RRL because BCP38 is the correct 
answer to the problem.

 Your work would be valuable if it helped pressure people to get busy on 
 DNSSEC.  

Seems to me the work they have done is valuable, regardless of DNSSEC.

Regards,
-drc



___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
dns-jobs mailing list
https://lists.dns-oarc.net/mailman/listinfo/dns-jobs

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-20 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

  I do not see an answer to my intended question. Again, given inevitably
 limited real time, over-committed programmer and DNS administrator
 hours, and limited money, should problems in DNSSEC implementations
 and installations be addressed before or after your issues?

 Some of our recommendations can be useful for security (also against
 other attacks - see earlier email), and can assist with
 interoperability problems, that may emerge during deployment of
 DNSSEC.
 For instance, our recommendations for IP layer, e.g., reducing IP
 defragmentation cache size or randomising IP-ID, can be useful not
 only against poisoning attacks (in particular, fragmentation has a
 long history of exploits for denial/degradation of service attacks);
 Port randomisation is also a useful feature, not only against cache
 poisoning (see earlier email); and so on.

Even if that were correct (it's partly wrong), it would not answer
my question.


 Should the people working on DNS implementations prioritize making
 their DNSSEC code more robust and easier to use above or below
 addressing your issues?

 I do not think that there is a general answer to this question, as it
 depends on the specific organisation/network.

I guess I'll play a little more by naming three specific outfits.

Should the organizations responsible for BIND, nsd, and unbound
work on DNSSEC improvements and bug fixes before or after your
issues?  Please do not feel constrained to give a single general
answer for all three organizations but please give 3 specific answers
for those 3 specific organizations.


 Which should be built or fixed first, mechanisms such as auto-signing
 that make DNSSEC easier to deploy and more robust (e.g. reducing
 accidental signature expiration), or your cache pollution issues?

 Should requests for proposals and requests for quotes rank DNSSEC
 features including ease of DNSSEC use above or below fixes for your
 cache pollution issues?

 Should you spend most of your own time looking for improvements and
 bugs in DNSSEC or looking for more ways to pollute DNS caches where
 DNSSEC is not used?

 Both are important: (1) disclosing attacks raises awareness to the
 significance of systematic defenses, and motivates deployment thereof; it
 also enables the networks to protect themselves in the meanwhile. I would
 not be surprised if similar attacks were deployed in the Internet, without
 anyone being aware of their existence.

 (2) I also study deployment challenges and improvements (and not only
 attacks).

Do you think anyone sees that as responsive to any of my questions?



 I think that is neither a response to my claim, accurate, nor
 relevant to what I understood of your claim about forwarders.

 How can something that introduces a security vulnerability not always
 be a bug in your or anyone's opinion?

 Sure, permissive mode is an explicit feature. I believe a bug is
 something that is not intentionally introduced (well, at least not
 as a general rule).

On the contrary, in the real world, any and all computer sins of
commission and many sins of omission are bugs according to the
people who matter, users and customers.  The programmer, vendor, or
whatever can sometimes escape censure by pointing at a specification,
but even when it works, that tactic never wins respect, friends, or
more business, and it can lose old business.


 If you meant instead to say that permissive verification is a less
 important bug than other things, then how do you rank your cache
 pollution issues against other bugs starting with not deploying DNSSEC?

 I would hope that disclosure may help some organisations and networks
 protect themselves. The other option is to be unaware of the
 vulnerabilities (and their exploits).
 Do you think vulnerabilities are better left for attackers to take care of?

That implication that I have ever said or suggested that you should
not have published your findings is disingenuous and offensive.

My problem with your findings is that you are grossly overstating
their significance.  None of them will ever be seen in the wild.  As
http://link.springer.com/chapter/10.1007/978-3-642-33167-1_16 and
as I've said, showing the inevitable weakness of port randomization
is good.


  BTW, we withheld some of the works from publication, and tried to
 coordinate disclosure.

That is the right thing to do except when exploits have already
been seen in the wild or are likely to be seen soon.  However, given
the practical certainty that none of your vulnerabilities will ever
be seen in the wild, no one should have complained if you had not.


 Your work would be valuable if it helped pressure people to get
 busy on DNSSEC.  However, instead of saying Use DNSSEC because
 port randomization has these newly discovered weaknesses, you only
 grudgingly and under pressure admit that DNSSEC even exists.

  Many networks cannot deploy DNSSEC overnight, due to different 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-20 Thread Vernon Schryver
 From: David Conrad d...@virtualized.org

  Should the people working on DNS implementations prioritize making
  their DNSSEC code more robust and easier to use above or below
  addressing your issues?
 
 I'd say below.
 
 Resolver operators (hopefully) want to protect their caches. DNSSEC
 will do that, but only if people are signing their zones. There are lots
 of external parties (e.g., registries, registrars, software developers,
 resolver operators, etc) to get DNSSEC deployed and there remains very
 little incentive for anyone to sign their zones, regardless of how
 robust and easy it might be made.

 The alternative would be to disregard current and future cache poisoning
 attacks. Pragmatically speaking, I personally think it highly
 questionable to ignore cache poisoning vulnerabilities because something
 which isn't yet deployed to 10% of the Internet will fix it.

 This would be a bit like saying don't deploy RRL because BCP38 is the
 correct answer to the problem.

On the contrary, anyone who spends even one minute on RRL that
could be spent on BCP 38 should be...well, I can't say shot
because I oppose capital punishment.  RRL should be considered
only after everything possible has been done for BCP 38.

Similarly, only after there is nothing that you can do to improve your
DNSSEC implementation should you consider improving your port
randomization.  I agree that port randomization should come before
a lot of other things, although that's not saying much because the
major DNS implementations are filled with things I would have vetoed
if I'd been king.

I think their work showing the weaknesses of port randomization in
theory and practice is important, because it shows that no security
should depend on adversaries being unable to inject packets into UDP
or TCP streams because ports are secret.  I strongly disagree with
Haya Shulman's words to Paul Vixie that seemed to say that their work
might fix other applications and protocols.  I think their work shows
that port randomization is like RRL, a lame kludge of a mess that is
better than nothing but not even a distant second choice to actually
fixing the problem.
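The "better than nothing but not a fix" point can be made with back-of-envelope arithmetic: a blind off-path spoofer must match the 16-bit transaction ID, and port randomization only adds roughly 16 more bits of guessing space; it narrows the attack window rather than closing it. A toy model (my simplification, assuming independent uniform guesses):

```python
def spoof_success_prob(forged_responses, txid_bits=16, port_bits=0):
    """Probability that at least one blindly forged DNS response matches
    the resolver's transaction ID (and source port, when randomised).
    Toy model: each forged response is an independent uniform guess."""
    space = 2 ** (txid_bits + port_bits)
    return 1 - (1 - 1 / space) ** forged_responses

# With a fixed port, ~2^16 forged responses already succeed with
# probability around 1 - 1/e; with 16 bits of port entropy the same
# effort succeeds with probability on the order of 2^16 / 2^32.
p_fixed_port = spoof_success_prob(2 ** 16)
p_random_port = spoof_success_prob(2 ** 16, port_bits=16)
```

This is exactly why derandomising the port (the subject of these papers) collapses the defense back to the 16-bit TXID, whereas DNSSEC removes the guessing game entirely.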

I say only consider improving port randomization, because nothing
should be added to anything or even changed without clear and
significant benefits, especially in security related areas.  You've
been around long enough to remember many added nice features that
caused big security problems.


Vernon Schryver v...@rhyolite.com

P.S. I'm licensed by http://ss.vix.su/~vixie/isc-tn-2012-1.txt and 
http://ss.vix.su/~vjs/rrlrpz.html to criticize RRL.

P.P.S. I've often heard Paul say much the same thing about RRL being
a bad idea except compared the alternative of ignoring the consequences
of everyone else's failure to deploy BCP 38.


[dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-19 Thread Haya Shulman
Hi All,

We (me and my phd advisor Prof Amir Herzberg) recently found a number of
new DNS vulnerabilities, which apply to patched and standard DNS resolvers,
and enable off-path attackers to foil [RFC5452] (and [RFC6056, RFC4697])
recommendations, allowing DNS cache poisoning attacks. The vulnerabilities
are not related to a specific DNS software or OS implementation.

Following some questions regarding which publication is related to what
vulnerability, for your convenience, please find a summary of our findings
and results below (concise conference publications are available on my
site sites.google.com/site/hayashulman/publications - I will soon upload
full versions). Please feel free to email me if you have questions related
to these works.
We are also interested in understanding the constraints and challenges that
Internet operators and administrators are facing and therefore will
appreciate your comments/feedback.

---
Summary: we performed a study of DNS security (focusing on cache poisoning
attacks) in the following settings:
1. Resolver-behind-Upstream (applies to resolvers that use upstream
forwarders for their security against attacks).
2. Socket-overloading for port derandomisation (applies to network settings
where attacker has a `good` and `stable` Internet connection and exploits
side-channels in kernel handling of hardware interrupts on the resolver
side).
3. Resolver-behind-NAT (applies to patched resolvers behind patched NAT
devices).
4. Second-fragment IP defragmentation cache poisoning (this attack was
discussed on this mailing list, and the idea is to replace the second
authentic fragment with a spoofed one).
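The second-fragment attack in item 4 can be illustrated with a toy reassembly-cache model (names and keys are illustrative; real IPv4 reassembly keys on source, destination, protocol, and IP-ID). An attacker who predicts the IP-ID pre-plants a spoofed tail fragment, which then reassembles with the authentic first fragment:

```python
class FragCache:
    """Toy IPv4 reassembly cache keyed by (src, dst, ip_id).
    The first fragment stored for a slot wins; a reassembled payload is
    returned once both the leading and trailing parts are present."""
    def __init__(self):
        self.pending = {}

    def add(self, src, dst, ip_id, part, payload):
        entry = self.pending.setdefault((src, dst, ip_id), {})
        entry.setdefault(part, payload)  # later duplicates do not overwrite
        if "first" in entry and "second" in entry:
            del self.pending[(src, dst, ip_id)]
            return entry["first"] + entry["second"]
        return None

cache = FragCache()
# Off-path attacker guesses ip_id=7 and plants a spoofed second fragment.
cache.add("ns.example", "resolver", 7, "second", b"SPOOFED-RRs")
# The authentic response later arrives fragmented with the predicted ip_id;
# its first fragment reassembles with the attacker's pre-planted tail.
reassembled = cache.add("ns.example", "resolver", 7, "first", b"DNS-header|")
```

Randomising the IP-ID (or shrinking the window in which planted fragments survive) is what the recommendations elsewhere in this thread target: the attacker's pre-planted fragment only matters if its (src, dst, ip_id) slot collides with the authentic one.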

---
[1, 2, 3] - present techniques to derandomise ports on systems that support
algorithms recommended in [RFC6056]. We also tested a number of popular DNS
checker systems, and their ability to detect the vulnerabilities.
[2, 3] - show how to perform IP address derandomisation against resolvers
conforming with recommendations in [RFC4697] (we then present applications
of this technique for NS pinning).
[4] - shows how to apply IP defragmentation cache poisoning to inject
content into DNS responses for DNS cache poisoning.

---
Details:

--- 1. Resolver-behind-Upstream ---
Resolver-behind-Upstream forwarder is recommended by security experts and
ISPs as a secure configuration to prevent DoS attacks against proxy
resolvers (which typically have limited bandwidth), and to prevent Kaminsky
DNS cache poisoning.
The intuition is that DNS requests are never sent directly to the name
servers, and thus the proxy resolver (that is configured to use a secure
upstream resolver for its requests) is secure.

We present different techniques allowing off-path attackers to find the IP
address of the proxy resolver (that uses an upstream forwarder for its DNS
requests) and then to discover the ports allocated by the proxy to its DNS
requests.

These attacks are very efficient, in particular when fragmentation (even of
a single byte) is possible (i.e., if the proxy and upstream do not use TCP
for communication). In contrast to [4], here we apply first-fragment IP
defragmentation cache poisoning to discover the port assigned by the proxy
to its requests to the upstream forwarder. Surprisingly, we found that many
proxies rely for their security on an upstream forwarder and simply send
all requests from fixed, or sequentially incrementing, ports.
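Fixed or sequentially incrementing ports are trivially predictable to an off-path attacker. A rough heuristic of the kind a DNS-checker service might run over observed query source ports (my sketch, not from the paper; a real checker would also test statistical randomness):

```python
def classify_port_allocation(ports):
    """Classify a sequence of observed per-query UDP source ports.
    Constant or fixed-stride sequences are predictable off-path;
    anything else only *might* be adequately randomised."""
    if len(ports) < 3:
        return "insufficient data"
    strides = {b - a for a, b in zip(ports, ports[1:])}
    if strides == {0}:
        return "fixed"
    if len(strides) == 1:
        return "sequential"
    return "possibly randomised"
```

Note the asymmetry: this test can prove a proxy is broken ("fixed"/"sequential") but can never prove it is safe, which is one reason the checker systems mentioned in [1, 2, 3] miss vulnerable configurations.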

Recommendations:
1. randomise ports selected by the proxy resolver.
2. use TCP between proxy and upstream forwarder.

Published at ESORICS'13:
http://link.springer.com/chapter/10.1007/978-3-642-40203-6_13#page-1

--- 2. Socket-overloading for port derandomisation ---
In this work we present techniques to elicit side-channels enabling
off-path attackers to discover the ports assigned by resolvers that support
per-destination port allocation algorithms recommended in [RFC6056]. We
also show how to apply socket overloading for NS pinning against resolvers
compliant with [RFC4697].

The effectiveness and efficiency of socket-overloading based techniques
depends on the quality of network connectivity of the attacker and
proximity to the victim resolver, i.e., number of hops between the victim
and the attacker. In particular, since this attack requires bursts of
traffic, if the attacker does not have good connectivity, its attack may be
detected by some IDS. On the other hand, the attacker can significantly
increase its success probability (as well as reduce the volume of the
burst) by distributing the burst among a number of nodes that it controls.

To appear at ACSAC'13 (paper Socket Overloading for Fun and
Cache-Poisoning on my site).

Tested against Linux kernel 3.2 (with support for NAPI mechanism), and
Windows server 2008.

Recommendations: randomise ports.
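One way to implement this recommendation is to draw each query's source port from a CSPRNG over the whole ephemeral range rather than from a counter. A sketch (the IANA range and retry count are my assumptions, not a prescription from the paper):

```python
import secrets
import socket

EPHEMERAL_LO, EPHEMERAL_HI = 49152, 65535  # IANA dynamic/ephemeral range

def bind_random_udp_port(attempts=32):
    """Bind a fresh UDP socket to a cryptographically random ephemeral
    port, retrying on collision, instead of reusing a fixed or
    incrementing port across queries."""
    for _ in range(attempts):
        port = EPHEMERAL_LO + secrets.randbelow(EPHEMERAL_HI - EPHEMERAL_LO + 1)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sock.bind(("0.0.0.0", port))
            return sock, port
        except OSError:
            sock.close()  # port already in use; draw another
    raise RuntimeError("could not find a free ephemeral port")
```

Per-query sockets bound this way defeat the per-destination predictability that the socket-overloading side channel exploits, at the cost of one bind per query.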

--- 3. Resolver-behind-NAT ---
We showed techniques that use user-space software (controlled by the
attacker) that can send DNS requests to the victim resolver, and enable an
off-path attacker to derandomise 

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-19 Thread P Vixie
M. Shulman, your summary does not list dnssec as a solution to any of these 
vulnerabilities, can you explain why not? Vixie
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-19 Thread Phil Regnauld
P Vixie (paul) writes:
 M. Shulman, your summary does not list dnssec as a solution to any of these 
 vulnerabilities, can you explain why not? Vixie

I was wondering about that, and went to look at the abstracts:

http://link.springer.com/chapter/10.1007/978-3-642-33167-1_16

Security of Patched DNS

[...]

We present countermeasures preventing our attacks; however, we believe
that our attacks provide additional motivation for adoption of DNSSEC
(or other MitM-secure defenses).

So at least this seems to be mentioned in the papers themselves (I
didn't pay to find out).

But I agree that the summary would benefit from stating this, as it's
currently the only way to avoid poisoning. Not stating it could lead
some to believe that these attacks are immune to DNSSEC protection of
the cache.

Cheers,
Phil


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-19 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

 We (me and my phd advisor Prof Amir Herzberg) recently found a number of
 new DNS vulnerabilities, which apply to patched and standard DNS resolvers,
 ...

 Recommendations:
 ...

The complete absence of any mention of DNSSEC among those recommendations
(or elsewhere) reads like an implicit claim that DNSSEC would not
help.  Even if that claim was not intended, would it be accurate?

Would DNSSEC make any of the recommendations less necessary or perhaps
even moot?  If DNSSEC by itself would be effective against cache
poisoning, then isn't it among the recommendations, especially for
Resolver-behind-Upstream?  Why aren't efforts to protect port
randomization, hide hidden servers and so forth like trying to make
it safe to use .rhosts and /etc/hosts.equiv files by filtering ICMP
redirects and IP source routing, and strengthening TCP initial sequence
numbers?

It's not that filtering ICMP redirects, etc. are wrong, but I think
today those things are used for availability instead of data integrity
(or authentication and authorization), and small leaks are not
always and everywhere seen as catastrophes.  In fact, haven't ICMP
redirects been reborn as fundamental parts of IPv6?


Vernon Schryver v...@rhyolite.com


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-19 Thread Haya Shulman
You are absolutely right, thanks for pointing this out.
DNSSEC is the best solution to these (and other) vulnerabilities and
efforts should be focused on its (correct) adoption (see challenges here:
http://eprint.iacr.org/2013/254).
However, since partial DNSSEC deployment may introduce new vulnerabilities,
e.g., fragmentation-based attacks, the recommendations, that I wrote in an
earlier email, can be adopted in the short term to prevent attacks till
DNSSEC is fully deployed.


On Sat, Oct 19, 2013 at 5:53 PM, P Vixie p...@redbarn.org wrote:

 M. Shulman, your summary does not list dnssec as a solution to any of
 these vulnerabilities, can you explain why not? Vixie
 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.





Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-19 Thread Haya Shulman
This is correct, the conclusion from our results (and mentioned in all our
papers on DNS security) is to deploy DNSSEC (fully and correctly). We are
proponents of cryptographic defenses, and I think that DNSSEC is the most
suitable (proposed and standardised) mechanism to protect DNS against cache
poisoning. Deployment of new Internet mechanisms is always challenging (and
the same applies to DNSSEC). Therefore, we recommend short term
countermeasures (against vulnerabilities that we found) and also
investigate mechanisms to facilitate deployment of DNSSEC.


On Sat, Oct 19, 2013 at 6:05 PM, Phil Regnauld regna...@nsrc.org wrote:

 P Vixie (paul) writes:
  M. Shulman, your summary does not list dnssec as a solution to any of
 these vulnerabilities, can you explain why not? Vixie

 I was wondering about that, and went to look at the abstracts:

 http://link.springer.com/chapter/10.1007/978-3-642-33167-1_16

 Security of Patched DNS

 [...]

 We present countermeasures preventing our attacks; however, we believe
 that our attacks provide additional motivation for adoption of DNSSEC
 (or other MitM-secure defenses).

 So at least this seems to be mentioned in the papers themselves (I
 didn't pay to find out).

 But I agree that the summary would benefit from stating this, as it's
 currently the only way to avoid poisoning. Not stating it could lead
 some to believe that these attacks are immune to DNSSEC protection of
 the cache.

 Cheers,
 Phil





Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2013-10-19 Thread Vernon Schryver
 From: Haya Shulman haya.shul...@gmail.com

 IMHO, DNSSEC is simply the natural defense against the attacks, which
 is why I did not explicitly mention it, but I definitely had it in
 mind :-)

In that case, on what should an organization spend time or money
first, on DNSSEC or the recommendations in the mail message?  Would
it be better if each of the recommendations in the mail message
started with something like this?

Deploy DNSSEC, and consider the following to help protect cached
data not yet protected by DNSSEC.

 Regarding the proxy-behind-upstream: to prevent the attacks DNSSEC has
 to be deployed(and validated) on the proxy. Currently it seems that
 there are proxies that signal support of DNSSEC (via the DO bit), but
 do not validate responses, and validation is typically performed by
 the upstream forwarder.

That sounds like a more significant bug than port obscurity or
randomization.  If it is a bug, which should be addressed first in
that software or those installations: this DNSSEC bug or the
recommendations in the mail message?  If it is a significant DNSSEC
bug, it would be good if a future version of the mail message
mentioned it.
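To make the signalling-versus-validation distinction above concrete: a proxy advertises DNSSEC support via the DO ("DNSSEC OK") bit, which lives in the OPT pseudo-record's reused 32-bit TTL field (per RFC 6891), but setting that bit says nothing about whether the sender validates responses. The sketch below is illustrative (not from this thread); it builds an EDNS0 OPT record on the wire and checks the DO bit, with `build_opt_record` and `do_bit_set` being hypothetical helper names.

```python
import struct

# DO bit mask within the OPT RR's 32-bit "TTL" field, whose layout is:
# EXTENDED-RCODE (8 bits) | VERSION (8 bits) | DO (1 bit) | Z (15 bits)
DO_BIT = 0x8000

def build_opt_record(udp_payload_size=4096, do=True):
    """Return the wire form of an EDNS0 OPT RR for a query's additional section."""
    name = b"\x00"                    # root domain name
    rtype = 41                        # OPT
    rclass = udp_payload_size         # CLASS field carries the max UDP payload size
    ttl_field = DO_BIT if do else 0   # DO bit signals "send me DNSSEC records"
    rdlen = 0                         # no EDNS options
    return name + struct.pack("!HHIH", rtype, rclass, ttl_field, rdlen)

def do_bit_set(opt_wire):
    """Check whether the DO bit is set in an OPT RR's TTL field."""
    _, _, ttl_field, _ = struct.unpack("!HHIH", opt_wire[1:])
    return bool(ttl_field & DO_BIT)
```

A resolver behind such a proxy sees DO set and may assume validation happens downstream; as noted above, auditing where validation actually occurs matters more than the bit itself.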


Vernon Schryver  v...@rhyolite.com