Re: [dns-operations] EDNS with IPv4 and IPv6 (DNSSEC or large answers)

2014-10-04 Thread Hannes Frederic Sowa
On Tue, Sep 23, 2014, at 23:41, Mark Andrews wrote:
 As for atomic fragments, it is a separate issue outside the control of
 the nameserver.

Because of a possible DoS vector, atomic fragments will be deprecated
soon:
http://tools.ietf.org/html/draft-gont-6man-deprecate-atomfrag-generation-00

Bye,
Hannes


Re: [dns-operations] BIND performance difference between RHEL 6.4 and FreeBSD 7

2014-04-24 Thread Hannes Frederic Sowa
On Tue, Apr 22, 2014 at 02:40:08PM -0700, Shawn Zhou wrote:
 
 
 Our performance tests show that ISC BIND (authoritative-only setup) doesn't
 perform well on RHEL 6.4 in comparison with FreeBSD 7 (see the attached
 graph, bind_perf.png).
 
 The drop rate is the ratio of responses never received to requests sent.
 BIND is configured with 24 worker threads and 24 UDP listeners, matching the
 number of CPU threads we have. The BIND process already has a nice priority
 of '-20' on RHEL and '20' on the FreeBSD host. The test hosts running RHEL
 6.4 and FreeBSD 7 are identical in terms of hardware:
 2 x Xeon E5-2430, 24GB DDR3 RAM, 1Gb/s NIC. 
 
 We conducted the tests by having our load generators replay BIND query logs
 using our custom scripts and send the queries to the test server at a given
 rate, say, 200,000 queries per second. The queries are preloaded into memory,
 so there is no overhead for the load generators reading queries from disk
 while sending test queries.
 
 What we've observed is that the socket receive queue (Recv-Q in netstat)
 drained very fast on FreeBSD but backed up pretty quickly on RHEL 6 when we
 ramped up the test traffic. With net.core.rmem_default set to 40MB, RHEL only
 handles up to about 180,000 qps before we start to see receive buffer
 overruns again and the drop rate increases linearly.

If the packet drops happen in the host kernel you can easily check where they
happen with dropwatch ("-l kas" resolves drop locations to kernel symbols via
kallsyms; type "start" at the dropwatch prompt to begin monitoring):

dropwatch -l kas
start

Would be nice to know.
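
Somewhat related, since net.core.rmem_default came up: that sysctl only sets
the default buffer a socket starts with. Below is a minimal sketch (my own
illustration, not BIND code) of how a server process can request a larger
per-socket UDP receive buffer itself; on Linux the value passed to SO_RCVBUF
is doubled internally and capped by net.core.rmem_max unless SO_RCVBUFFORCE
is used with CAP_NET_ADMIN.

/* Illustration only: request a larger UDP receive buffer on one socket
 * and print what the kernel actually granted. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int want = 40 * 1024 * 1024;              /* 40MB, as in the test above */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want)) < 0)
        perror("setsockopt SO_RCVBUF");

    int got = 0;
    socklen_t len = sizeof(got);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &got, &len);
    printf("effective receive buffer: %d bytes\n", got);  /* capped by rmem_max */
    return 0;
}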

Greetings,

  Hannes


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2014-01-15 Thread Hannes Frederic Sowa
Hi!

On Wed, Jan 15, 2014 at 01:26:20PM -0800, Colm MacCárthaigh wrote:
 Unfortunately I can't share data, but I have looked at a lot of it. In
 general, I've seen TTLs to be very stable. Most ECMP is flow-hashed these
 days and so as long as the path is stable, the TTLs should be identical. If
 there's some kind of transition mid-datagram, then the TTLs may legitimately
 mismatch, but those events seem to be very rare.

Counterexample: Linux does not use flow-hash-steered ECMP. You see the effect
on end hosts because of the route lookup being cached in the socket (as long
as the cache isn't invalidated and the socket stays connected).

The problem is that as soon as such a knob is provided, people could create
DNS black holes (until the resolver times out and retries over TCP; maybe
this could be sped up with ICMP error messages). Only a couple of such
non-flow-hash-routed links would suffice to break the internet for a lot of
users. I am pretty sure people will enable this knob as soon as it is
provided and word gets around.

If we are willing to accept that, we could just force the DF bit on all
outgoing packets and ignore the users behind some specific minimal MTU. That
would solve the problem more elegantly, with the same consequences. And error
handling with the DF bit is better specified and handled by the kernel, thus
more robust and easier to debug (provided UDP path MTU discovery is
implemented by the OS). ;)
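
To illustrate the DF-bit variant: a minimal sketch, assuming Linux, of setting
DF on a UDP socket through the standard IP_MTU_DISCOVER socket option. With
IP_PMTUDISC_DO the kernel never fragments locally, a send larger than the
cached path MTU fails with EMSGSIZE, and incoming ICMP "fragmentation needed"
errors update that cache.

/* Illustration only: make all datagrams sent on this socket carry DF. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>        /* IP_MTU_DISCOVER, IP_PMTUDISC_DO */

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int val = IP_PMTUDISC_DO;  /* always set DF, rely on path MTU discovery */
    if (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val)) < 0)
        perror("setsockopt IP_MTU_DISCOVER");
    return 0;
}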

 netfilter would be fine, but it'd be nice to not incur any state cost
 beyond what the UDP re-assembly engine is keeping already.

netfilter reuses the core reassembly logic (at least for IPv4, not yet for
IPv6). As soon as netfilter is active, packets get reassembled by netfilter
and passed up the network stack without going through the core fragmentation
cache again. So the TTL could be kept in the frag queue and subsequent
fragments would be required to match it exactly when they are appended.
Technically that would be no problem to do; I really doubt it is wise to do
so.

Greetings,

  Hannes  


Re: [dns-operations] summary of recent vulnerabilities in DNS security.

2014-01-15 Thread Hannes Frederic Sowa
On Wed, Jan 15, 2014 at 03:33:02PM -0800, Colm MacCárthaigh wrote:
 For DNS, we have the option to respond with a TC=1 response, so if I
 detected a datagram with suspicious or mismatching TTLs, TC=1 is a decent
 workaround. TCP is then much more robust against intermediary spoofing. I
 can't force the clients to use DF though.

That would need to be implemented as cmsg (ancillary data) access and cannot
be done as a netfilter module (unless the DNS packet generation is also
implemented as a netfilter target). Because this touches core code, it really
needs strong arguments to get accepted. Maybe it can be done as part of the
socket fragmentation notification work. I'll have a look, but I want to think
first about how easily this could be circumvented. Maybe you have already
thought about that?
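
For reference, the kind of cmsg interface I have in mind already exists for
the per-datagram TTL: below is a minimal sketch, assuming Linux, that enables
IP_RECVTTL and reads the arriving TTL as ancillary data with recvmsg(). A
per-fragment TTL (or a trigger to answer with TC=1) has no such interface
today; it would presumably have to be delivered through the same mechanism,
which is where the core-code changes come in.

/* Illustration only: read the TTL of incoming datagrams via cmsg. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(5353),   /* arbitrary example port */
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }

    int on = 1;
    setsockopt(fd, IPPROTO_IP, IP_RECVTTL, &on, sizeof(on));

    char payload[1500], cbuf[CMSG_SPACE(sizeof(int))];
    struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = cbuf, .msg_controllen = sizeof(cbuf) };

    if (recvmsg(fd, &msg, 0) < 0) { perror("recvmsg"); return 1; }

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_TTL) {
            int ttl;
            memcpy(&ttl, CMSG_DATA(c), sizeof(ttl));
            printf("datagram arrived with TTL %d\n", ttl);
        }
    }
    return 0;
}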

Thanks,

  Hannes
