[dns-operations] The biennial APNIC survey is open. https://survey.apnic.net/

2022-06-12 Thread George Michaelson
The biennial APNIC survey is open. https://survey.apnic.net/

Because, amongst other things, our share of reverse DNS has a strong
dependency on APNIC services across the board, I think it would be net
beneficial if some of you completed this, to help us understand what
you need from us and how to deliver it.

There are also prizes.

-George
___
dns-operations mailing list
dns-operations@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations


Re: [dns-operations] How should name resolution work on a modern system?

2022-06-10 Thread George Michaelson via dns-operations
--- Begin Message ---
I am very glad somebody is asking these questions. They're food for thought,
and they go beyond strict DNS/53 (not that it IS port 53 any more, so I mean
"old-school DNS as protocols") to the wider question of name-to-locator
mapping in a multiple-service context, and the fracture of a unitary
namespace into name spaces, subject to views and order-of-enquiry issues.

These are great questions. I wish I had answers. I hope I see some!

G

On Sat, 11 Jun 2022, 5:50 am Petr Menšík,  wrote:

> Hello DNS experts,
>
> I have been thinking about requirements for future name resolution, primarily
> on Linux desktops and servers. But I would also be glad if anyone could
> comment on other systems and their designs. I would like to formulate the
> requirements first and find a good way to implement them later.
>
> We have two ways to resolve names to IP addresses on GNU/Linux.
>
> - The first is the libc interface getaddrinfo(), backed by NSS plugins.
> Names can also be resolved by protocols other than DNS; good examples are
> mDNS (RFC 6762), LLMNR (RFC 4795) and Samba (nmblookup). The standardized
> calls provide only a blocking resolution interface.
>
> * An asynchronous interface does not exist in useful form. It is easy to
> handle multiple connections in a single thread, but multiple resolutions
> in a single thread are not supported. NSS plugins are simple to write, but
> hard to use in a responsive program. Should that be changed?
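The blocking behaviour, and the thread-offload workaround that passes for "async" resolution today, can be seen with Python's stdlib wrapper around getaddrinfo() (a sketch; asyncio has no native asynchronous stub resolver and simply runs the blocking call in a worker-thread pool):

```python
import asyncio
import socket

def resolve_blocking(host):
    # The standardized, blocking path: the calling thread stalls until
    # the NSS/DNS machinery answers or times out.
    return [sa for *_, sa in socket.getaddrinfo(host, 80, type=socket.SOCK_STREAM)]

async def resolve_async(host):
    # No native async stub resolver here: asyncio offloads the same
    # blocking getaddrinfo() to a thread pool -- exactly the
    # worker-thread hassle described above.
    loop = asyncio.get_running_loop()
    infos = await loop.getaddrinfo(host, 80, type=socket.SOCK_STREAM)
    return [sa for *_, sa in infos]

print(resolve_blocking("localhost"))
print(asyncio.run(resolve_async("localhost")))
```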
>
> * mDNS usually uses names under the .local domain. What should the
> preferred order be for single-label names, like 'router.'? Should LLMNR be
> tried first, Samba first, or the DNS search list applied first? Should the
> resolver avoid reaching out to DNS when no search domain is set?
>
> - Our primary interest is the DNS protocol. On Unix systems, the
> nameservers to use are specified in /etc/resolv.conf, along with some
> options. We would like to offer a DNS cache installed on the local machine,
> which should speed up repeatedly resolved names.
>
> * I would like support for multiple interfaces, with redirection of name
> subtrees to servers on specific network interfaces. For example,
> 'home.arpa' redirected to the local router at home, but example.com
> redirected to a VPN connection. I think RFC 8801 and RFC 7556 specify a
> standardized way to list interface-specific domains; existing
> implementations currently misuse RFC 2937 as the source of such a list.
> Something like this is implemented by systemd-resolved on Ubuntu and Fedora
> systems, but it introduced a couple of new issues. Is something similar
> implemented on end-user machines? I think laptops and phones are the
> typical multi-interface devices where it would make sense.
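A minimal sketch of that routing idea (assumed semantics, not how systemd-resolved actually implements it): send a query to the interface whose configured domain is the longest suffix match for the name, falling back to a default. The interface and domain names below are hypothetical.

```python
def pick_interface(name, table):
    # table maps a routing domain to an interface; the "" entry is the
    # default route for names matching no configured domain.
    q = name.rstrip(".").lower()
    best, best_len = table.get(""), -1
    for domain, iface in table.items():
        d = domain.rstrip(".").lower()
        if d and (q == d or q.endswith("." + d)) and len(d) > best_len:
            best, best_len = iface, len(d)
    return best

routes = {"home.arpa": "wlan0", "example.com": "tun0", "": "eth0"}
print(pick_interface("printer.home.arpa", routes))  # -> wlan0
print(pick_interface("www.example.com", routes))    # -> tun0
print(pick_interface("other.net", routes))          # -> eth0
```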
>
> My questions:
>
> - How should single-label names be handled?
> -- Are domain (option 15) and search (option 119) from DHCP already dead?
> Should they be completely avoided even on trusted networks?
> -- In which order should resolution be tried? Should the machine's cache
> block queries for single-label hostnames, so they are not sent over the
> DNS protocol unless expanded to FQDNs?
> -- I have seen search domains used in cloud technologies. Is there a
> common example of what they are used for? Do we need an ndots option with
> a value different from 1?
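For reference, resolv.conf-style search/ndots handling can be sketched as below. The semantics are modelled on common stub resolvers, and 'corp.example' is a hypothetical search domain, not anything from the thread.

```python
def candidate_queries(name, search=("corp.example",), ndots=1):
    # A trailing dot marks an explicit FQDN: no search expansion.
    if name.endswith("."):
        return [name]
    as_is = name + "."
    expanded = [f"{name}.{s}." for s in search]
    # Names with at least `ndots` dots are tried as-is first;
    # otherwise the search list is applied first.
    if name.count(".") >= ndots:
        return [as_is] + expanded
    return expanded + [as_is]

print(candidate_queries("router"))       # -> ['router.corp.example.', 'router.']
print(candidate_queries("example.com"))  # -> ['example.com.', 'example.com.corp.example.']
```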
>
> - Should we expect DNSSEC capability on every machine?
> -- Should we even enable DNSSEC validation on every machine by default?
> When would that be a good idea, and when wouldn't it?
>
> - Should an asynchronous API be provided for common name-to-address
> lookups and vice versa? One which would support both local-network
> resolution and unicast DNS in an easy-to-use way? Usable even in GUI
> applications without the usual hassle of worker threads?
>
> If there is documentation for mapping name subtrees to interface-specific
> servers on different systems, I would be glad if you could share links to
> it. If we are to improve the current situation, I would like to first
> gather the expected requirements for such a system. Is there a summary
> already?
>
> Thank you for any feedback!
>
> Best Regards,
> Petr
>
> --
> Petr Menšík
> Software Engineer, RHEL
> Red Hat, http://www.redhat.com/
> PGP: DFCF908DB7C87E8E529925BC4931CA5B6C9FC5CB
>
>
--- End Message ---


Re: [dns-operations] [Ext] K-root in CN leaking outside of CN

2021-11-07 Thread George Michaelson
I helped deploy the unit. Great war stories about this. I think my
favourite was the guys charcoaling lunch in the machine room next door
in the DC, using a wastepaper basket as a cooker, sitting on a
salvaged sofa. That, and having rack power die when the room lights
were turned out.

On Mon, Nov 8, 2021 at 9:53 AM Manu Bretelle  wrote:
>
>
>
> On Sun, Nov 7, 2021 at 1:48 AM Ray Bellis  wrote:
>>
>>
>>
>> On 07/11/2021 09:28, Ray Bellis wrote:
>>
>> > There most certainly were - a similar leak/poison pattern was
>> > detected from an I root node hosted in China in March 2010.
>>
>> and checking our own records, we've had F root servers in China since at
>> least 2006.
>>
>> However we announce our Anycast prefixes "NO_EXPORT" and make it very
>> clear that our routes must not propagate beyond the border.
>
>
> Thanks Ray, that’s useful info.
>
> My original Google searches seemed to indicate that the first root servers 
> in CN were quite recent, but I did not dig enough.
> A colleague pointed me to
> https://archive.nanog.org/meetings/nanog53/presentations/Tuesday/Losher.pdf /
> https://bgpmon.net/f-root-dns-server-moved-to-beijing/
>
> which was about F hosted in China leaking out back in Oct 2011, though the 
> NANOG slides indicate that answers were not rewritten in that case.
>
> Manu
>>
>>
>>
>> Ray



Re: [dns-operations] [Ext] Obsoleting 1024-bit RSA ZSKs (move to 1280 or algorithm 13)

2021-10-21 Thread George Michaelson
I would be concerned that the language which makes the recommendation
HAS to also note the operational problems. You alluded to the UDP
packet-size problem, and implicitly the v6 fragmentation problem. What
about the functional limitations of HSMs and associated signing
hardware? I checked, and the units we operate (for purposes other than
DNSSEC) don't support RSA-1280; they do RSA-1024 or RSA-2048. This is
analogous to the recommendation I frequently make casually, to stop
using RSA and move to the shorter elliptic-curve signature algorithms
to bypass the size problem: they are slower, and they aren't supported
by some hardware cryptographic modules.

Even without moving algorithm, signing gets slower as a function of
key size, as well as time to brute-force. So there is a loss of
"volume" of signing events through the system overall, and the time to
re-sign zones can change. Maybe this alters some operational boundary
limits? (From what I can see, 1024 -> 1280 would incur a 5x slowdown;
1024 -> 2048 would be a 10-20x slowdown; RSA to elliptic curve could
be a 50x or worse slowdown.)
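As a rough sanity check on those figures: the textbook cost of an RSA private-key operation grows roughly with the cube of the modulus size, which gives a lower bound on the slowdown. Real signers, and HSMs especially, can be considerably worse, which is consistent with the larger numbers quoted above. A back-of-envelope sketch, not a benchmark:

```python
def rsa_signing_slowdown(bits_old, bits_new):
    # Crude model: modular exponentiation cost scaling roughly as
    # (modulus size)^3.  A floor on the slowdown, not a measurement.
    return (bits_new / bits_old) ** 3

print(round(rsa_signing_slowdown(1024, 1280), 2))  # -> 1.95
print(round(rsa_signing_slowdown(1024, 2048), 1))  # -> 8.0
```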

If the case for "bigger" is weak, and the consequences of bigger are
operational risks, maybe bigger isn't better, if the TTL-bounded
lifetime is less than the brute-force risk.

A totally fictitious example, but... let's pretend somebody has locked
in to a hardware TPM, and it simply won't do the recommended algorithm,
but would power on with 1024 until the cows come home. If the TTL was
kept within bounds, and re-signing could be done on a 10-day cycle
rather than a 20-day cycle (for instance), I don't see why the
algorithm change is the best choice.

cheers

-George

On Fri, Oct 22, 2021 at 11:46 AM Brian Dickson
 wrote:
>
>
>
> On Wed, Oct 20, 2021 at 10:22 AM Paul Hoffman  wrote:
>>
>> On Oct 20, 2021, at 9:29 AM, Viktor Dukhovni  wrote:
>>
>> > I'd like to encourage implementations to change the default RSA key size
>> > for ZSKs from 1024 to 1280 (if sticking with RSA, or the user elects RSA).
>>
>> This misstates the value of breaking ZSKs. Once a ZSK is broken, the 
>> attacker can impersonate the zone only as long as the impersonation is not 
>> noticed. Once it is noticed, any sane zone owner will immediately change the 
>> ZSK again, thus greatly limiting the time that the attacker has.
>
>
> This presupposes what the ZSKs are signing, and what the attacker does while 
> that ZSK has not been replaced.
>
> For example, if the zone in question is a TLD or eTLD, then the records 
> signed by the ZSK would include almost exclusively DS records.
> DS records do change occasionally, so noticing a changed DS with valid 
> signature is unlikely for anyone other than the operator of the corresponding 
> delegated zone.
> An attacker using such a substituted DS record can basically spoof anything 
> they want in the delegated zone, assuming they are in a position to do that 
> spoofing.
> And how long those results are cached is controlled only by the resolver 
> implementation and operator configuration, and the attacker.
>
> So, the timing is not the duration until the attack is noticed 
> (NOTICE_DELAY), it is the range MIN_TTL to MIN_TTL+NOTICE_DELAY (where 
> MIN_TTL is min(configured_TTL_limit, attacker_supplied_TTL)).
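Brian's timing bound can be written out directly (a sketch; times in seconds, variable names taken from his paragraph, the example values are hypothetical):

```python
def attack_window(configured_ttl_limit, attacker_supplied_ttl, notice_delay):
    # Forged, cached data persists for at least MIN_TTL and for at
    # most MIN_TTL + NOTICE_DELAY.
    min_ttl = min(configured_ttl_limit, attacker_supplied_ttl)
    return (min_ttl, min_ttl + notice_delay)

# e.g. a resolver capping TTLs at 1 day, an attacker supplying 7 days,
# and a 1-hour notice delay:
print(attack_window(86400, 604800, 3600))  # -> (86400, 90000)
```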
>
> The ability of the operator of the delegated zone to intervene with the 
> resolver operator is not predictable, as it depends on what relationship, if 
> any, the two parties have, and how successful the delegated zone operator is 
> in convincing the resolver operator that the cached records need to be purged.
>
> Stronger ZSKs at TLDs is warranted even if the incremental improvement is 
> less than what cryptographers consider interesting, IMNSHO. It's not an 
> all-or-nothing thing (jump by 32 bits or don't change), it's a question of 
> what reasonable granularity should be considered in increments of bits for 
> RSA keys. More of those increments is better, but at least 1 such increment 
> should be strongly encouraged.
>
> I think Viktor's analysis justifies the suggestion of 256 bits (of RSA) as 
> the granularity, and thus recommending whatever in the series 1280, 1536, 
> 1792, 2048 the TLD operator is comfortable with, with recommendations against 
> going too big (and thus tripping over the UDP-TCP boundary).
>
>>
>> In summary, it is fine to propose that software default to issuing larger 
>> RSA keys for ZSKs, but not with an analysis that makes a lot of unstated 
>> guesses. Instead, it is fine to say "make them as large as possible without 
>> causing automatically needing TCP, and ECDSA P256 is a great choice at a 
>> much smaller key size".
>
>
> I'm fine with adding those to the recommendations (i.e. good guidance for the 
> rationale for picking ZSK size and/or algorithm), with the added emphasis on 
> not doing nothing.
>
> Brian

Re: [dns-operations] root? we don't need no stinkin' root!

2019-11-26 Thread George Michaelson
I tend toward functional questions in these matters. These are not a
symmetric pair, but they go to different sides of the problem:

1) what will happen if we imagine these queries not being answered? A
hypothetical (*and it's not zero-cost*) front-end process which drops
them.

2) what is the consequence of continuing to answer these queries?

Noting 1) is not trivial: I believe that if these queries were not
answered, there would be short-term downsides but long-term upsides.
The problem would (I believe) go away.

Noting 2) is the "do nothing" option. The only clear consequence is
that we keep incurring cost, in root instantiations. It is possible
that if we go with run-root-on-local/loopback we smear the cost, but
do we reduce it?

-G

On Wed, Nov 27, 2019 at 1:00 PM Mark Allman  wrote:
>
>
> Hi Paul!
>
> > The biggest problem I see here is the legacy/long-tail problem. As
> > of a few years ago, I bumped into BIND 4 servers still
> > active. Wouldn't be shocked to hear they are still being used.
> >
> > IPv4 reachable traditional DNS servers for some tiny group of
> > antique folks will be needed for years, even if we get 99+% of the
> > world to some new system.
>
> I wonder if we're ever allowed to just decide this sort of thing is
> ridiculous old shit and for lots of reasons we can and should just
> garbage collect it away.
>
> > Doesn't mean we shouldn't be thinking about a better way to do it
> > for that 99% though.
>
> Is it better if we only get to 99%?
>
> To me, this whole notion is that we can in fact get rid of this
> giant network service.  If we don't get rid of it then what is the
> incentive to move one's own resolver away from using the root
> nameservers?  I don't have any heartburn with RFC 7706.  But, it is
> a quite minor optimization in the general case.  It may well be
> important in some corner cases, but in general I don't think running
> a local root nameserver helps all that much.
>
> Maybe 99% lets us draw down the size of the root infrastructure...I
> dunno.  But, if we don't say something like "it's going to go away"
> then I am not sure resolvers will move away from it.
>
> allman
>
>
> --
> https://www.icir.org/mallman/
> @mallman_icsi


Re: [dns-operations] DNSSEC issue - why?

2015-06-09 Thread George Michaelson
On 9 June 2015 at 14:53, Edward Lewis edward.le...@icann.org wrote:

 On 6/9/15, 7:42, Mark Andrews ma...@isc.org wrote:
 ents that can be referenced
 separately (like in RFP's and contracts).  I found that trying to make
 code prefer newer technologies over old by fiat seems to backfire (like
 the way DNS used to prefer v6 over v4 and now seems to have reversed,
 looking at some observed behavioral studies).


interesting this crops up, and in a thread with Mark. I am told he recently
confirmed that there is no systematic deliberate biasing towards V4 in the
code: it's just shortest-RTT selection.

I wonder if there is a non-intentional bias against V6, e.g. in the order
the calls are made, or in lazily-evaluated IF-statement logic, because we
see overwhelmingly more V4 than V6 on dual-stack NSes with no cached state.
-G

Re: [dns-operations] Postures was Re: Stunning security discovery: AXFR may leak information

2015-04-15 Thread George Michaelson
I find the question "if you had an FTP fetch of the zone, would you
feel comfortable making that available for anonymous FTP?" a useful
one.

In reverse, we have the entire zone state as FTP files, publicly
visible, signed with PGP. And we have whois, with varying degrees of
throttling, for operational-stability reasons more than anything else.

If we got swamped on FTP, I wouldn't be happy, but that's an
operational issue about TCP cost and data cost, not about the zone
contents per se.

I'm happy that, in reverse, it makes sense to know that numbers are
numbers: they have a sequence, and it's not much less informative than
other published information about who-has-what.

So on that basis the FTP rule passes: we have open FTP, so why would
we block AXFR?

-G

On 15 April 2015 at 13:26, Edward Lewis edward.le...@icann.org wrote:
 John Crain alluded to the point I want to reinforce here.  There are many
 different operational postures.  It's tempting to see a situation as it
 applies to just one.  The three snips below illustrate common environments
 I've run across - TLD (/registration zones), remote debugging
 (/third-party management), and enterprise.

 When I think of generally I assume the latter environment.  By
 comparison, there are very few operations that handle TLD (and root) zone.

 The remote debugging is an interesting environment.  On the one hand it is
 benign, coaching and basically freely helping others.  But the technical
 footprint of it is not far removed from outside surveillance (the NSA or
 corporate spying), with the real difference locked into intent.  And
 sometimes even benign outside help is considered an intrusion.

 As far as generally unwise - I am not the kind who likes loose ends.  By
 analogy, I see opening up AXFR on servers like walking with my shoes
 untied.  It's convenient (to not have to bend over and tie them) but if I
 step on one end I trip over.  Usually, my stance is wide enough that I
 don't trip.  The other concern is getting the laces wet in puddles, so I
 pull them in. (Yes, it is disturbing I've actually thought about this.)
 And worse yet, when I do this, my wife will frown at me.  I.e., once I
 mitigate the risks of tripping, stepping in puddles, and the scorn of my
 wife, it's fine.  If I don't consider these risks, I've been unwise.

 On 4/14/15, 18:58, Patrik Fältström p...@frobbit.se wrote:

I see personally quite a number of registries that are nervous about
XFR (or release of the zone in one way or another)

 On 4/14/15, 19:29, Mark Andrews ma...@isc.org wrote:

I, and I know others, have been able to debug DNS problems reported
on bind-users because we could see the full zone contents which
would have been harder or perhaps impossible to solve otherwise.


 On 4/14/15, 16:31, Michael Sinatra mich...@brokendns.net wrote:

The real reason I see for restricting AXFR is to preserve resources on
the server.  This is less of an issue now than it was in the BIND 4 days



Re: [dns-operations] DNS Flush Protocol

2015-03-27 Thread George Michaelson
OK, that's a good motivation. Nicely stated.

Models based on in-band proof(s) of possession might then, in some
sense, be better. While I hate meta-protocol usage, since we don't
have a cc channel that zone owners share with resolver owners, it
might be a tool in the locker.

How do you feel about state in the resolver to rendezvous on? Because
if we can do DNS 'query knocking' with held state, we can signal both
intentionality and proof of possession. There is an obvious DoS risk
in making a resolver hold state, but it's probably no worse than the
amplification-attack risks.

Or if we have a held-open session, then sequences of queries can be
more meaningful: I connect, I prove something doesn't exist with a
zero TTL, I perform a state change in the zone and re-query, which
shows you I effected change for a prior query.

-G




On 27 March 2015 at 15:08, Paul Vixie p...@redbarn.org wrote:


 George Michaelson wrote:
 I would agree that assumptions are a road to perdition.

 But the model of concentration of eyeballs through resolvers is not
 new. So, whilst I agree in *principle* I think it bears thinking
 about: do you actually really expect a disruptive (sea)change  here?

 yes. or i wouldn't have worked on RPZ. the DNS resolution path is a huge
 component of internet autonomy, and it is under powerful attack by both
 corporations and governments around the world, for censorship,
 surveillance, and commerce purposes. to regain control of their own
 internet experience and to protect their privacy against upstream
 wiretapping, many enterprises of all sizes and many power users are
 going to move back to a private resolver model. we should do nothing in
 this WG that makes that movement less attractive, such as creating a DNS
 cache purge model that requires registration, subscription, or a
 clearinghouse.

 --
 Paul Vixie


Re: [dns-operations] DNS Flush Protocol

2015-03-27 Thread George Michaelson
I would agree that assumptions are a road to perdition.

But the model of concentration of eyeballs through resolvers is not
new. So, whilst I agree in *principle* I think it bears thinking
about: do you actually really expect a disruptive (sea)change  here?

I mean, I think it's more likely we get a sea-change in the signed-root
outcomes than that fewer people use 8.8.8.8 and 8.8.4.4, personally. Or
Comcast, given their centrality in current (and foreseeable future)
market share now they're getting the eyes behind TW. Or China's
concentration of views behind 3-4 carriers.

So yes. But then again.. Perhaps.. No.

On 27 March 2015 at 14:16, Paul Vixie p...@redbarn.org wrote:
 see also:

 http://www.techrepublic.com/blog/data-center/opendns-and-neustars-real-time-directory-aim-to-speed-dns-update-times/


Re: [dns-operations] AWS footnote: DNS firewall rules are UDP only

2015-01-28 Thread George Michaelson
I entirely agree. This is a point-specific issue.

There are lots of port-53 stupidities, but this is one which has a single
locus of control, which can be viewed as 'tractable'.

On 29 January 2015 at 10:09, Paul Hoffman paul.hoff...@vpnc.org wrote:

 Are there any Route 53 people on this list? If so, this should be fixed
 ASAP.

 --Paul Hoffman

  On Jan 28, 2015, at 11:28 AM, Fred Morris m3...@m3047.net wrote:
 
  I just noticed that when configuring firewall rules for an AWS instance,
  if DNS is chosen then the (only) protocol automagically filled in is
  UDP.
 
  To get TCP, you have to create a custom TCP rule.
 
  When you save, the UDP one gets saved as DNS, the TCP one stays custom
  TCP rule.


Re: [dns-operations] Assuring the contents of the root zone

2014-12-01 Thread George Michaelson
Here is a strawman, to try and understand the discussion.

If we imagine some datastream which is the result of an AXFR or HTTP
request.

 cmd | tr 'A-Z' 'a-z' | sort -u | checker

this takes the stream, does LWSP replacement, and sorts the lines
alphabetically, then generates e.g. a SHA-256 digest.

the tr phase is just for example; presumably a more complex set of rules
is required to DeMangLE the case conversion and punycode, but the sense
is that we have a deterministic encoding of the state of every label in
the zone and its attributes.
attributes as an encoding.

The sort phase generates a single well-understood (POSIX sort) ordering of
bytes. These can then be compared.

Why is this worse than, e.g., an RR-by-RR comparison walking the NSEC
chains? What I like about it is that it is applicable to being given the
data out of band: if you have a putative zone, you can apply this logic
and determine whether the zone matches what is published elsewhere as the
canonical state of the zone.

The RR-by-RR NSEC walk feels like a DNS expert's approach, not a
systems/generic approach.
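The pipeline above can be sketched in a few lines (a strawman matching the email, with the same caveat that real canonicalization needs more than lowercasing and sorting):

```python
import hashlib

def zone_digest(zone_text):
    # Strawman canonicalization: lowercase, strip surrounding
    # whitespace, drop blank lines, sort-unique, then hash.  Real
    # rules would also need to normalize TTLs, escapes and punycode.
    lines = sorted({ln.strip().lower() for ln in zone_text.splitlines() if ln.strip()})
    return hashlib.sha256("\n".join(lines).encode()).hexdigest()

a = "B.example. 3600 IN NS ns2.example.\na.example. 3600 IN NS ns1.example.\n"
b = "a.example. 3600 IN NS ns1.example.\nb.example. 3600 IN NS ns2.example.\n"
print(zone_digest(a) == zone_digest(b))  # -> True
```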

-G

On 2 December 2014 at 11:29, Paul Vixie p...@redbarn.org wrote:



Paul Hoffman paul.hoff...@vpnc.org
 Monday, December 01, 2014 3:48 PM
   People have asked for two things:

 1) Getting the root zone by means other than AXFR, such as by HTTP

 2) Being sure that they got the exact root zone, including all of the glue
 records


 i think you meant "zone", not "root zone", here.


 A signed hash meets (2) regardless of how the zone was transmitted.


 not inevitably. the verification tool would be new logic, either built
 into the secondary name server, or as an outboard tool available to the
 transfer mechanism. when i compare the complexity-cost of that tool to
 the contents of the ftp://ftp.internic.net/domain directory, i see that
 existing tools whose complexity-cost i already pay would work just fine
 (those being pgp and md5sum). so, a detached signature can in some cases
 meet (2) far more easily than an in-band signature.

 it's also the case that rsync and similar tools (and AXFR) use TCP, which
 most of us consider reliable even though its checksums aren't nearly as
 strong as SCTP's. therefore your problem statement "being sure they got
 the exact right zone" would have to refer to an MiTM, possibly inside the
 secondary server (if the zone receiver is a tertiary), or possibly
 on-path. in either case, to frustrate the MiTM, the proposed in-band
 signature would have to be DNSSEC-based.

 and there is already an in-band DNSSEC-based zone identity/coherency test
 -- zone walking. why would we add another way to do the same thing we could
 do with existing DNSSEC data?


 ...
 Adding a record that says here is a hash of this zone, and adding an
 RRSIG for that record, is the simplest solution. There are other solutions
 that are exactly as secure; however, they are all more complex, and some
 involve using the zone signing key for signing something other than the
 contents of an RRSIG.

 i think walking the existing zone and verifying that there are no records
 between the nsecs and that every signature is valid and that the nsec chain
 ends at the apex, is simpler.

 vixie
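One of the checks vixie describes, that the NSEC chain forms a single loop ending back at the apex, can be sketched as below (hedged: canonical ordering and signature validation are omitted, this only checks chain closure):

```python
def nsec_chain_closed(apex, nsecs):
    # nsecs maps each owner name to its NSEC "next name"; the chain is
    # closed if following it from the apex visits every owner exactly
    # once and wraps back to the apex.
    seen, name = set(), apex
    while name not in seen:
        seen.add(name)
        if name not in nsecs:
            return False
        name = nsecs[name]
    return name == apex and seen == set(nsecs)

chain = {"example.": "a.example.", "a.example.": "b.example.", "b.example.": "example."}
print(nsec_chain_closed("example.", chain))  # -> True
```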


Re: [dns-operations] Assuring the contents of the root zone

2014-12-01 Thread George Michaelson
It's not designed to handle dynamic updates. It's designed to handle being
given, or accessing, an entire zone state, and having a canonicalization
method which can be applied by anyone, using POSIX tools, to determine if
it is correct and complete.

On 2 December 2014 at 15:38, Doug Barton do...@dougbarton.us wrote:

 George,

 It's hard for me to see how this would easily handle dynamic updates.

 Doug


 On 12/1/14 5:56 PM, George Michaelson wrote:
  Here is a strawman, to try and understand the discussion.
 
  If we imagine some datastream which is the result of an AXFR or HTTP
  request.
 
   cmd | tr 'A-Z' 'a-z' | sort -u | checker
 
  this takes the stream, does LWSP replacement, and sorts the lines
  alphabetically and generates eg SHA256
 
  the tr phase is just for example. presumably a more complex set of rules
  are required to DeMangLE the case conversion and punycode but the sense
  is, that we have a deterministic state of any label in the zone and its
  attributes as an encoding.
 
  The sort phase generates a single understood (POSIX sort) order of
  bytes. These can then be compared.
 
  Why is this worse than eg an RR by RR comparison, walking the NSEC
  chains? What I like about it, is that its applicable to being given the
  data OOB. if you have what is a putative zone, then you can apply this
  logic, and determine if the zone matches what is published elsewhere as
  a canonical state of the zone.
 
  The RR by RR and NSEC walk feels like a DNS experts approach. Not a
  systems/generic approach.
 
  -G


Re: [dns-operations] Assuring the contents of the root zone

2014-12-01 Thread George Michaelson
I think the use of *must* here is non-normative. You make a strong case
that a canonicalization must understand dynamic update. But you also choose
to ignore a huge world of context where people are presented with zones as
a fait accompli: not as participants in port 53, but as files.

I think we're silly to exclude mechanisms which are understandable by
anyone, over what are (for much of their life) represented as files.

There is a tool in BIND which reads a .jnl. So, if I take the outcome of a
dynamic update, secure it in a transactionally complete .jnl, and then
apply the tool, I have a file of a zone state, at a given point in time,
for a given serial.

At which point I can canonicalize it, and apply checks against a published
statement of the zone's integrity.

I don't want to exhaust anyone's patience. I've said my bit; I am content
if you have some closing last word on this.

I won't post any more on this idea just now. I can tell that I am swimming
against the tide.

On 2 December 2014 at 16:13, Paul Vixie p...@redbarn.org wrote:



 George Michaelson wrote:
  Its not designed to handle dynamic updates. Its designed to handle
  being given, or accessing an entire zone state, and having a
  canonicalization method which can be applied by anyone, using POSIX
  tools to determine if its correct and complete

 george, dns is dynamic now. a signature method must address the update
 case. here's what i wrote in response to paul-h:

  i'm imagining a stream cipher that begins as the H(K,zone) and then is
  updated to be H(K,H_old,delta) for each change to the zone, which
  would have to be calculated by the responder in the case of UPDATE,
  but could then be issued as a succession of new zone signature RR's
  during IXFR. the zone signature RR would have to be like SOA,
  there-can-be-only-one, so what might look like a set of them in an
  IXFR, is really a bunch of changes to the one-and-only. ...

 --
 Paul Vixie
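A hedged reading of that sketch in code, using HMAC-SHA256 as the H (the chained construction, not the particular algorithm or key, is the point; the key and record bytes below are hypothetical):

```python
import hashlib
import hmac

def initial_digest(key, zone_bytes):
    # H(K, zone): the starting digest over the whole zone.
    return hmac.new(key, zone_bytes, hashlib.sha256).digest()

def updated_digest(key, old_digest, delta_bytes):
    # H(K, H_old || delta): rolled forward once per change, so IXFR
    # consumers can verify incrementally without re-hashing the zone.
    return hmac.new(key, old_digest + delta_bytes, hashlib.sha256).digest()

k = b"zone-signing-secret"
d0 = initial_digest(k, b"example. 3600 IN SOA ...")
d1 = updated_digest(k, d0, b"add a.example. A 192.0.2.1")
```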


Re: [dns-operations] cool idea regarding root zone inviolability

2014-11-27 Thread George Michaelson
If somebody said to me:

 let's have a canonicalize() function which makes a deterministic
byte-stream of the state of a zone, and then calculate a checksum over it
I'd struggle to say that was a bad idea.

If you have a transform which takes updates of any kind, AXFR or IXFR, and
can then re-canonicalize and test the zone state against a known published
record of the zone state, what's the downside?

I understand there is an element of redundancy in the scheme, against what
DNSSEC can already do at the RR level, but I think, from what's been said,
and what is being said about schemes to provide for re-distribution of the
root, this makes a lot of sense.

The history only informs the present, it doesn't have to determine it. The
statement about 'learning from history, doomed to repeat it' is not meant
to say you cannot re-try things previously considered and rejected. It
means you need to be aware of them.

-G

On 28 November 2014 at 07:18, Edward Lewis edward.le...@icann.org wrote:

  After reading on…

  I think the rationale of killing SIG(AXFR) was that DNSSEC is there to
 protect the relying party and not the manager of the zone.  I.e., a relying
 party only cared about the data it received pertinent to the query it
 issued.  This made the building the chain of trust as efficiently as
 possible paramount.  Forking into zone replication was a design distraction.

  That’s not the reason SIG(AXFR) failed, that’s the reason we didn’t try
 harder to accomplish it.  DNSSEC did not exist to make managing DNS
 better[0] (I.e., protect against zone truncation), so taking time to do
 that was hindering the primary purpose of answering questions with proof of
 authenticity and integrity.

  [0] Joke and snicker if you must.  Yes DNSSEC today makes running
 today’s DNS harder, but keep in mind that the state of the system in the 90’s
 was so bad that it would not have survived without the major rewrites DNSSEC
 development caused.  DNSSEC didn’t have a real good foundation to build
 upon.

   On 11/27/14, 17:48, Warren Kumari war...@kumari.net wrote:



 On Thursday, November 27, 2014, Francisco Obispo fobi...@uniregistry.link
 wrote:

  +1

  And if someone is already serving the root zone, they can always modify
 the server to return AA.

  I'm also wondering about the use case.


  See above - this has *nothing* to do with setting or not setting AA.
 This simply allows the entity serving a zone to confirm that they have a
 complete, uncorrupted, and untampered copy of the zone. Think of it as a
 cryptographic checksum if you like.
 Before serving a zone (as a master or slave) I'd like to know it is
 correct...

  W



  Francisco Obispo

 On Nov 27, 2014, at 1:55 PM, Paul Vixie p...@redbarn.org wrote:



  Warren Kumari
 Thursday, November 27, 2014 1:11 PM
  ... and Mark Andrews, Paul Hoffman, Paul Wouters, myself and a few
 others (who, embarrassingly enough, I have forgotten) are planning on writing
 a zone signature draft (I have an initial version in an edit buffer). The
 50,000 meter view is:
 Sort all the records in canonical order (including glue)
 Cryptographically sign this
 Stuff the signature in a record

  This allows you to verify that you have the full and complete zone
 (.de...) and that it didn't get corrupted in transfer.
 This solves a different, but related issue.


 would this draft change the setting of the AA bit on a secondary
 server's responses, or make it unwilling to answer under some conditions?
 right now there is no dependency, AA is always set. but if we're going to
 make it conditional, then it should be conditioned on the signatures
 matching all the way up-chain to a trust anchor, which would require an
 authority server to also contain a validator and be able to make iterative
 queries. so, i wonder about the use case for your draft.

 --
 Paul Vixie




 --
 I don't think the execution is relevant when it was obviously a bad idea
 in the first place.
 This is like putting rabid weasels in your pants, and later expressing
 regret at having chosen those particular rabid weasels and that pair of
 pants.
---maf



Re: [dns-operations] Curious use of cname

2014-08-06 Thread George Michaelson
We all said symlinks were a bad idea when Berkeley did them. We also all
use them. The properties OF the symlink matter more than people realize,
because 99% of the time they care about the properties of the object
pointed to by the symlink.

When Sun added ${symbolic} expansion on the fly to symlinks we all said it
was a bad idea. I don't think many of us use that much any more.

Oh sorry: did I say symlink? I meant CNAME. Morally, the DNSSEC sigs over
the CNAME are like the properties of the symlink. All the rest is about the
target.




On Thu, Aug 7, 2014 at 10:06 AM, Andrew Sullivan a...@anvilwalrusden.com
wrote:

 On Thu, Aug 07, 2014 at 07:51:53AM +1000, Mark Andrews wrote:
  Those with developers that don't read RFC 1034 which tried to prevent
  this from happening.

 You're probably right.  But of course, RFC 1034 was written a number
 of years ago, and some of the protocol-specification language that
 later became well-understood isn't used in it.  In particular,

  RR.  If a CNAME RR is present at a node, no other data should be
  present; this ensures that the data for a canonical name and its aliases
  cannot be different.

 this makes it sound like "nothing at a CNAME but a CNAME" is a good
 idea, instead of "if you have a CNAME, that means by definition
 nothing else can be there."  To a naïve reader, the text above might
 read as, "You shouldn't do this, but you could.  But it'd have a bad
 consequence, and you don't want that, right?"  What it should say, of
 course, is more like, "CNAME just means that the name you looked up is
 actually some other name, therefore there MUST be no other data at the
 owner name of a CNAME."  Something like that.
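The rule is mechanical enough to check: a sketch that flags owner names carrying a CNAME alongside other data (the rrset representation is hypothetical; in signed zones RRSIG/NSEC at the CNAME owner are legitimate, so they are excluded here):

```python
from collections import defaultdict

# types that may legitimately coexist with a CNAME in a signed zone
DNSSEC_OK = {"CNAME", "RRSIG", "NSEC", "NSEC3"}

def cname_violations(rrsets):
    """rrsets: iterable of (owner, rrtype) pairs; return offending owners."""
    by_owner = defaultdict(set)
    for owner, rrtype in rrsets:
        by_owner[owner.lower()].add(rrtype.upper())
    return sorted(o for o, types in by_owner.items()
                  if "CNAME" in types and not types <= DNSSEC_OK)
```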

 I've talked to people who've been facile with the DNS for a number of
 years, who didn't get that this wasn't some arbitrary rule, but was
 the very meaning of canonical name.  If you explain it, the lights
 always go on.  But RFC 1034 does a poor job of explaining it.

 Best regards,

 A

 --
 Andrew Sullivan
 a...@anvilwalrusden.com


Re: [dns-operations] Trustworthiness of PTR record targets

2014-03-04 Thread George Michaelson
PTR records can exist in any zone. They matter when they lie under
in-addr.arpa and ip6.arpa because gethostbyaddr() roots queries in that
name path. But, let's be clear, you can jam a PTR into any place you like:
it's just an RR.

Under .ARPA, the zones which administer PTR records are strongly aligned by
dot-breaks in IPv4 and IPv6 to octet and nibble boundaries. The actual
zone-cut point varies, but they have a strong alignment which is
necessarily constrained to the octet/nibble boundaries. In IPv4 it's /8
aligned; in IPv6 it's a mix of older /24 and /12 delegations to the RIR.

For those levels delegated by IANA to the RIR, the boundaries are well
understood and the DNSSEC signatures over the delegations understood.

If you go one level lower, the dot-enforced boundaries vest in the
address holder, and again, DNSSEC could make a strong trust over that
binding. /16 and /24 delegations are put directly into each /8 zonefile,
but no /24 should be there if the parent /16 exists. And likewise in IPv6.
We (the RIRs) try very hard not to admit delegations which 'reach over' the
holder at a higher level.

But once you get deeper, we've lost a sense of public review and public
administration: it's a single locus of control inside an address-holding
entity, and how accurately they track the specific PTR binding is unclear,
and unspecified. There is no control. A bad actor can say that any given IP
address binds to any name. It's not constrained.
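The octet/nibble mapping described above is exactly what the standard library computes; a quick illustration using documentation-prefix example addresses:

```python
import ipaddress

# IPv4: one label per octet, reversed, under in-addr.arpa
v4 = ipaddress.ip_address("192.0.2.53").reverse_pointer
print(v4)  # 53.2.0.192.in-addr.arpa

# IPv6: one label per nibble of the full 128-bit address, under ip6.arpa,
# which is why delegation points must sit on 4-bit boundaries
v6 = ipaddress.ip_address("2001:db8::1").reverse_pointer
print(v6)
```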


On Tue, Mar 4, 2014 at 10:20 AM, Jim Reid j...@rfc1035.com wrote:

 On 3 Mar 2014, at 17:26, Stephen Malone stephen.mal...@microsoft.com
 wrote:

  1.   In general, can I trust PTR records? Is ownership of the target
 domain validated at setup time by ISPs, and if yes, how is this done?

 Define what you mean by trust and validate. For bonus points, define
 ownership.

  2.   If ownership of PTR targets is not routinely validated, is
 there a risk that the target domain could be blacklisted by anti-spam
 providers?

 Again, please define validate.

 AFAICT organisations like Spamhaus don't care about PTR records at all.
 Addresses get blacklisted because they send spam or are open mail relays or
 are known to be in prefixes used for residential customers, or whatever.
 Whatever names may be associated with those addresses are unlikely to
 matter, regardless of what validation is done or not done.

 If you want to know what anti-spam organisations do with PTR records, I
 suggest you ask them directly.





Re: [dns-operations] signing reverse zones

2014-02-12 Thread George Michaelson
I am probably saying this badly, and I regret any implied
teaching-your-granny-to-suck-eggs. That's not my goal.

I understand reverse DNS as a namespace which contains assertions made by
the entity which controls the next-highest dot-separated label delegation
point, or the hunt backwards to the root. That doesn't have to BE the
immediate rightmost dot: if you are a /24, the /16 might not be delegated.
Likewise there may only be a /24, and the intermediate /16 may not exist as
a delegation point: the /24 may reside directly in an RIR /8 delegation
zone file.

So, in that respect, it represents an assertion of PTR made by a 'parent'
entity over the addressblock in question.

Because it is forced to align to the dot boundary of the mapping of an IPv4
or IPv6 address, which does not align cleanly with the CIDR boundary of
routing, it has a more limited applicability to statements over routing.

There are people who believe there are ways round that, but it's
work-in-progress. (Which, btw, I am not involved with, and I make no claims
as to its viability or otherwise. I do personally think a clean CIDR/prefix
respecting namespace in DNS would be interesting and useful.)
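To make the alignment point concrete, here is a sketch enumerating the smallest set of octet-aligned in-addr.arpa zones needed to cover an IPv4 prefix (boundary choices are illustrative; real delegation practice varies, and prefixes longer than /24 are out of scope here):

```python
import ipaddress

def reverse_zones(prefix: str) -> list[str]:
    """Octet-aligned in-addr.arpa zones covering an IPv4 prefix (<= /24)."""
    net = ipaddress.ip_network(prefix)
    # round the prefix length up to the next octet boundary
    step = min(b for b in (8, 16, 24) if b >= net.prefixlen)
    zones = []
    for sub in net.subnets(new_prefix=step):
        octets = str(sub.network_address).split(".")[: step // 8]
        zones.append(".".join(reversed(octets)) + ".in-addr.arpa")
    return zones

# a /22 does not fit one zone: four /24 reverse zones are needed
print(reverse_zones("10.20.0.0/22"))
```

A /16 or /24 maps to exactly one zone, but anything between octet boundaries fans out into multiple zones — the mismatch with CIDR that the paragraph above describes.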

So, noting that it has limited applicability in the wide to the exact-match
of routing, it does at least have a relationship to the management of the
address space and respects a top-down delegation/hierarchy model of
management. Again, I am not trying to say what should/should-not be
regarding that, just that it does follow a hierarchy, which can be useful,
because trust stems from the root TA out of band, so a signed DNS reverse
space represents a series of encompassing hierarchical trust statements
from the root down.

Parent assertions can be useful. Signed parent assertions can be useful.
They can include information which materially says "for more information go
here", so in principle they can empower address holders, under a suitable
framework, to make trustable assertions about an IP address.

Can anyone think of reasons why they might want to do that?


On Thu, Feb 13, 2014 at 8:03 AM, Mark Boolootian boo...@ucsc.edu wrote:

 Hi Randy,

  I'm interested in knowing if it is standard practice amongst folks to
  sign .arpa zones.  Is there a compelling use case for signing reverse
  zones?
 
  standard practice?  you some kinda control freak?

 Learned at the feet of the masters (and thank you :-)

  first there is the arguments about whether reverse zones are useful and
  should be populated.  i happen to use reverse lookup daily, so i try to
  maintain them well for all the address space for which i am responsible.

 We do likewise.

  so, given that i am gonna maintain the zone, why would i not want to
  also sign the data?  the amount of work is trivial, and it's just one
  more step in trying to paint security on the horribly insecure internet.

 I was anticipating more of a beating for my question, but apparently
 there is an overabundance of politeness here :-)  All points taken.

 mark


Re: [dns-operations] Geoff Huston on DNS-over-TCP-only study.

2013-08-21 Thread George Michaelson
Thanks for the clarification. We did in fact detect initial configuration
issues with the default TCP listen backlog of 3, but once we'd put this up to
2000 we only had one brief window of RST congestion as detected by a simple
TCP filter. This test was for a domain space which serves around 250,000
experiments per day, each representing 4 DNS queries, none of which could
be cached. So it was at 1,000,000 q/day, which is obviously a low sustained
query rate of around 10 q/sec. I suspect with better kernel knowledge we
could have avoided any server-forced RST and served a higher load.
Certainly, a TCP-based DNS service faces a lot of questions about how it's
designed and scaled.
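The backlog knob in question is the argument to listen(); a minimal sketch, with illustrative values (on Linux the effective queue is also capped by the net.core.somaxconn sysctl):

```python
import socket

# a backlog of 3 overflows almost immediately under sustained TCP query
# load, producing RSTs; a few thousand gives the accept loop headroom
BACKLOG = 2000

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: any free port, for the sketch
srv.listen(BACKLOG)          # queue of connections awaiting accept()
host, port = srv.getsockname()
```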

I believe our goal was to find out how many clients, measured by resolver,
failed to complete a TCP-forced DNS query. Other people will be looking at
the server side; that wasn't what we were primarily exploring. People who
want to consider TCP-based DNS need both sides of the question space filled,
so choosing to analyse client failure isn't the whole picture, but it is
part of the picture.

Your canard reply makes much better contextual sense now.

cheers

-george


On Wed, Aug 21, 2013 at 4:16 PM, Paul Vixie p...@redbarn.org wrote:



 George Michaelson wrote:
  ...
  So, while I understand we're not DNS experts and we may well have made
 some mistakes, I think a one word 'canard' isn't helping.

 there is no way to either get to or live in a world where dns usually
 requires tcp. there would be way too much state. most people are capable
 of writing the one-line perl script that will put a dns responder into
 tcp exhaustion and keep it there at very little cost to the attacker,
 but those same people can read section 5 of RFC 5966 and not see the
 threat. granted that if all name servers miraculously implemented the
 recommendation servers MAY impose limits on the number of concurrent
 TCP connections being handled for any particular client then the perl
 script would have to be longer than one line, there's just no world there.

 had the original dns tcp protocol been structured so that the server
 closes and the clients won't syslog anybody or otherwise freak out when
 the server closes, we could imagine a high transaction rate on
 short-lived connections. tcp's 3xRTT and 7-packet minimum would seem
 harsh but at least we'd have some hope of goodput during deliberate
 congestion attacks.

 an experiment that looks at this from the client's point of view tells
 us nothing about the server's availability during congestion. i could
 wish that measurements of tcp dns performance would include a caveat
 such as this has not been tested at internet scale or even
 internet-wide dependence on dns tcp may be vulnerable to trivial denial
 of service attacks.

 almost everybody who looks at this says just use TCP. if the solution
 to the bcp38 problem in DNS were that easy, we would not have written
 
 https://www.usenix.org/legacy/publications/login/2009-12/openpdfs/metzger.pdf
 
 and william would not have written RFC 6013.

 it's also worth looking again to
 http://tools.ietf.org/html/draft-eastlake-dnsext-cookies-02.

 vixie


Re: [dns-operations] Geoff Huston on DNS-over-TCP-only study.

2013-08-20 Thread George Michaelson

On 21/08/2013, at 3:23 PM, Paul Vixie p...@redbarn.org wrote:
 
 Dobbins, Roland wrote:
 http://www.circleid.com/posts/20130820_a_question_of_dns_protocols/
 
 canard.
 

We invested quite a lot of time re-checking things with a shorter EDNS0 limit 
coded into BIND, to confirm the TCP failure rate without the use of the CNAME 
to force the initial response over the limit (i.e., removing the complication 
of the CNAME intermediary). It was interesting that even when the A record 
information appears to be in the TC response, people ignore it and fall back to 
TCP anyway. I had worried that the presence of a valid answer plus truncate in 
additional would cause some number of tested people to take the pre-truncation 
data anyway. It doesn't appear to happen.
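The fallback behaviour hinges on one header bit; a sketch of pulling TC out of a raw DNS response, per the RFC 1035 wire layout:

```python
import struct

def header_flags(wire: bytes) -> dict:
    # DNS header: 16-bit ID, then a 16-bit flags field
    _ident, flags = struct.unpack("!HH", wire[:4])
    return {
        "QR": bool(flags & 0x8000),  # response
        "AA": bool(flags & 0x0400),  # authoritative answer
        "TC": bool(flags & 0x0200),  # truncated: client should retry via TCP
        "RD": bool(flags & 0x0100),  # recursion desired
    }
```

A stub that honours TC retries the query over TCP regardless of any partial answer riding along in the truncated UDP payload, which matches the observed behaviour in the experiment.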

The results with a simpler A-only forced-TC test are the same: we see a gross 
rate of resolver failure-to-complete of 17%, and a user rate of 2%, bearing in 
mind the extensive use of Google 8.8.8.8 and, in general, 2+ resolvers per 
client.

So, while I understand we're not DNS experts and we may well have made some 
mistakes, I think a one-word 'canard' isn't helping.

-G







[dns-operations] are we adding value?

2013-01-15 Thread George Michaelson

Maybe it's just me, but I think most of the 'add complexity' being discussed 
here is fruitless, and devalues DNS. It's retrofit on a simple protocol to try 
and cover for situations not foreseen, which I believe is very often 
counter-productive.

We don't continue to use telnet in the wide any more; we moved to SSH. That 
doesn't mean telnet option negotiations are 'wrong', but it does mean that 
the telnet protocol isn't the one which services the need we have any more for 
telematic/interactive access services.

telnet as it remains doesn't have a heap of post-2000 knobs added. If you want 
those features, you go somewhere else. It's been left fit for purpose in a 
narrowly defined role.

I think the same is true of DNS. It's a global label-to-value lookup service 
with a nice, small definition of the separator and the cut point, and some 
guidance on TTL/caching. We've retrofitted the beginnings of some security, 
but at considerable cost, and for an outcome which is now showing problems 
like the amplification-attack effects.

I think sending a stronger message about uRPF-type defences, and asking other 
people to look at spoofed source, is better. Sometimes it pays to recognise you 
can't solve a problem, and look to who can. After all, if we reduced the amount 
of spoofed source, then we'd reduce attack modes in more than just DNS. The 
'real' problem here isn't DNS spoofed-source attacks, it's spoofed-source 
attacks. If, for instance, somebody discovers a way to use this in HTTP and 
achieve a 1000x amplification, they won't just be using DNS, will they?

(I know, TCP doesn't work. But you get the sense of what I mean. Spoofed UDP 
streams of video might work?)
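The attacker economics reduce to a simple ratio; an illustration with assumed packet sizes (the 64-byte and 3000-byte figures are hypothetical, not measurements from this thread):

```python
def amplification(query_bytes: int, response_bytes: int) -> float:
    # reflection gain: the victim receives response_bytes for every
    # query_bytes the attacker spends on a spoofed-source query
    return response_bytes / query_bytes

# e.g. a small ANY query eliciting a large DNSSEC-laden response
print(f"{amplification(64, 3000):.1f}x")  # 46.9x
```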

I realize it won't completely work, and that there will 'be' a problem to be 
solved here. I also think that the kind(s) of solutions which increase the 
cost on the spoofer are probably the best we have right now, combined with 
some amount of probabilistic/heuristic dropping. But I still find myself 
thinking this is just turning the value equation in DNS right down.

We're in a world where the goal is to answer questions, quickly and accurately. 
The fixes are beginning to look like major attacks on that fundamental.

I'm also confused about the 'no more ANY' discussion. Maybe I over-read, but I 
think ANY is a useful query, and I think ending it entirely would be a mistake. 
ANY allows for queries where you don't know the specific payload you need. Do 
we really want to remove that?

-G


Re: [dns-operations] responding to spoofed ANY queries

2013-01-10 Thread George Michaelson
On Jan 10, 2013 7:49 PM, Jim Reid j...@rfc1035.com wrote:


 It would be nice if ANY queries just got thrown away. I can live with the
breakage that causes. YMMV. However if there was something that generally
blocked or discarded ANY queries, the bad guys would switch to some other
QTYPE that can't be blocked without causing significant operational
problems.

 ___

What makes you think they won't? I mean, isn't this a classic mistake of
cold-war defence modelling: that you assume your enemy will use weapons you
can confidently defend against, and ignore the ones you suspect you cannot?

ANY has good amplification. If it's not working, they surely will move to
others. Or both. And if it is working they may move to others anyway.

G

Re: [dns-operations] Summary: Anyone still using a Sun/Oracle SCA6000 with OpenSSL?

2012-10-14 Thread George Michaelson

On 15/10/2012, at 3:10 AM, Ondřej Surý ondrej.s...@nic.cz wrote:

 Just a question - would anyone be interested in joining a project to 
 build an OpenHardware FPGA-based HSM with focus on DNSSEC?
 
 O.
 

APNIC has no skills in FPGA-level design and construction, but I would be very 
interested in participating in the specification of the user-space requirements 
for this work.

I'm particularly interested in its ability to support a key migration mechanism 
which would prevent capture of the signing materials by a single 
implementation. We're finding that the equipment we have now supports simple 
mechanisms within the single-vendor chain (so you can upgrade) but has no good 
inter-provider key exchange mechanism, and we've had a similar experience with 
other (DNSSEC) solutions.

A recent conversation suggests that there are no good standards in this space, 
and that a public/private key exchange between the HSMs is the way to go, 
possibly augmented by a shared secret generated on-box and shared between them.

-G


 On 16. 8. 2012, at 2:24, George Michaelson g...@apnic.net wrote:
 
 
 I got 8 replies. 2 ccTLD, 2 root Ops, almost everyone in s/w development or 
 operational related roles, and some independent consultants.
 
 Only one happy user, and I'd qualify that: they'd want a longterm migration 
 plan off the device. This person is using Solaris.
 
 Everyone said avoid more than 255 keys on the device. Several said use the 
 import/export mechanism.
 
 Two people explicitly mentioned the bad Linux driver. 
 
 The overall tone of the (small sample) responses is: this is not a good 
 choice right now
 
 
 My context is not DNSSEC, its RPKI, which has a far larger keypair 
 requirement. Noting a suggestion to re-use keypairs, I'd still have to 
 risk-manage future potential for multiple keys per hosted client, and exceed 
 the on-card keystore size, so the suggestion to use the import/export 
 features makes sense. Having said that, documentation on this is really 
 scant, and its hard to confirm how easily you can manage this given there is 
 no explicit OpenSSL PKCS11 support for managing PKCS12 wrapped objects, and 
 you are therefore using a java or shell command to do the key import, 
 followed by OpenSSL engine, followed by shell/java to remove the key. 
 
 If you use a pure Java solution its probably more tenable.
 
 Thank you to everyone for the response. I hope this summary meets a sense of 
 privacy, and OT posting.
 
 -G
 
 --
 Ondřej Surý -- Chief Science Officer
 ---
 CZ.NIC, z.s.p.o.--Laboratoře CZ.NIC
 Americka 23, 120 00 Praha 2, Czech Republic
 mailto:ondrej.s...@nic.czhttp://nic.cz/
 tel:+420.222745110   fax:+420.222745112
 ---
 


Re: [dns-operations] go daddy refuses to register NS not otherwise associated with go daddy controlled domains

2012-09-11 Thread George Michaelson

On 12/09/2012, at 10:27 AM, Mark Jeftovic mar...@easydns.com wrote:

 I don't understand this, they are saying they will only do this for
 domains under their management, implying that this domain isn't.

Yes. I am saying that using their GUI, to manage a domain which is managed with 
them, they limit NS to:

1) in-bailiwick, defined in a hosts.txt file they control and expose a UI to

2) any NS already in use by you, in any domain, under godaddy control

3) any grandfathered-in NS, but I cannot verify if this remains true if you 
drop them from the record of any domain.

 
 But you later say, only godaddy can modify the whois record for this
 domain, which means godaddy is the registrar of record.

yep.

 
 So you do this through the registrar of the parent domain, godaddy -
 what am I not getting? Are they not doing it because you're not actually
 using their nameservers?


I am attempting to define NS delegation outside of the zone, and not using 
their NS.

 
 The only exception I can think of is when you need to create a
 nameserver record in the gtld roots for a nameserver that is in some
 other ccTLD namespace - many registrars won't do that.
 
 - mark

Yep. I'm learning (fast) that this is understood. It's a subset of possible NS, 
restricted to specific values to avoid some bad behaviour, but as a consumer of 
DNS services I didn't notice the restriction coming in. A new entrant would be 
presented with a fait accompli.

Because of the grandfathering of 'strange' NS in imported WHOIS records, I 
suspect that many of us won't have noticed.

-G
 


[dns-operations] Summary: Anyone still using a Sun/Oracle SCA6000 with OpenSSL?

2012-08-16 Thread George Michaelson

I got 8 replies: 2 ccTLD, 2 root ops, almost everyone in s/w development or 
operations-related roles, and some independent consultants.

Only one happy user, and I'd qualify that: they'd want a long-term migration 
plan off the device. This person is using Solaris.

Everyone said avoid more than 255 keys on the device. Several said use the 
import/export mechanism.

Two people explicitly mentioned the bad Linux driver. 

The overall tone of the (small sample) responses is: this is not a good choice 
right now.


My context is not DNSSEC, it's RPKI, which has a far larger keypair 
requirement. Noting a suggestion to re-use keypairs, I'd still have to 
risk-manage the future potential for multiple keys per hosted client, and 
exceed the on-card keystore size, so the suggestion to use the import/export 
features makes sense. Having said that, documentation on this is really scant, 
and it's hard to confirm how easily you can manage this, given there is no 
explicit OpenSSL PKCS11 support for managing PKCS12-wrapped objects: you are 
therefore using a Java or shell command to do the key import, followed by the 
OpenSSL engine, followed by shell/Java to remove the key.

If you use a pure Java solution it's probably more tenable.

Thank you to everyone for the responses. I hope this summary respects the 
privacy of respondents, and excuses the off-topic posting.

-G


[dns-operations] Anyone still using a Sun/Oracle SCA6000 with OpenSSL?

2012-08-12 Thread George Michaelson

Could anyone who is still using an Oracle/Sun SCA6000 card please contact me 
off-list.

I am doing device comparisons, and need to confirm some behaviours with OpenSSL 
Engine

1) Can it do SHA256?

2) What's the effective sign rate when you have 10,000 to 20,000 
keypairs, or is the key volume irrelevant?

3) Although not EOL, I am not seeing an increase in platforms: it's 
Solaris, or Red Hat officially; no BSD.

Sorry to (ab)use the list, but information from Google and other searches 
suggests little movement since the 2009/2010 timeframe, and information on the 
product is hard to find.

Even post-Oracle, it has a significant price advantage compared to HSMs which 
have this key volume built in.

-George