> On 22 Feb 2019, at 9:52 am, Joe Abley <[email protected]> wrote:
> 
> On 21 Feb 2019, at 14:34, Mark Andrews <[email protected]> wrote:
> 
>> Machines die. Machines are unplugged. Servers are unreachable at critical 
>> times. Externally driven cleanup can never be reliable. 
> 
> I'm not disputing any of that. I guess my first question was whether 
> cleanup is necessary, and my second (assuming it is) whether cleanup can't 
> just be handled as an implementation detail as opposed to a protocol extension.
> 
> If the master server that receives UPDATEs matching particular names could be 
> configured to remove them after a suitable interval, wouldn't that do the 
> trick?

No.  How do you make “permanent” changes to the zone using UPDATE?
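A minimal sketch (stdlib Python, all names hypothetical) of the problem with a blanket server-side cleanup timer: UPDATE itself carries no record-lifetime signal, so the server cannot distinguish a "permanent" change from a temporary one, and a policy that removes everything after an interval would delete both. Carrying an explicit, optional lease as per-record state — which is what an in-zone mechanism provides — lets the sweep leave permanent records alone:

```python
import time

class Record:
    """Toy zone entry: rdata plus an optional lease (absolute expiry time).
    lease=None models a 'permanent' record added via UPDATE."""
    def __init__(self, name, rdata, lease=None):
        self.name, self.rdata, self.lease = name, rdata, lease

def sweep(zone, now=None):
    """Garbage-collect only records whose lease has expired.
    Permanent records (lease=None) are never removed."""
    now = time.time() if now is None else now
    return [r for r in zone if r.lease is None or r.lease > now]

zone = [
    Record("www", "192.0.2.1"),                  # permanent
    Record("laptop", "192.0.2.99", lease=1000),  # temporary, expires at t=1000
]
print([r.name for r in sweep(zone, now=2000)])   # only "www" survives
```

Without the `lease` field there is nothing for the sweep to test against, which is the point of the question above: a timer-only policy has no way to express "keep this forever".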

> Slave servers would get new copies of the zones concerned following the 
> updates in the normal manner. Zone propagation is pretty swift with NOTIFY. 
> The master could apply policy based on criteria like owner name pattern 
> matching or source of the update. Garbage collection might not happen with 
> the kind of split-second accuracy that I sense this mechanism's proponents 
> are suggesting, but does it need to? Don't we believe that applications that 
> expect more than loose coherence from the DNS are broken?

Nothing in this draft changes loose coherence.  Everything is done by the 
master.  The slave has the data, so it has the state to become the master when 
the machine that was the master dies.

> I hear and acknowledge that there is a desire for this kind of functionality 
> (i.e. I believe you that it's necessary), but I'm still not clear on what 
> need there is for interoperability (and hence standardisation). Every DNS 
> implementation contains its own special features that are not standardised 
> and that don't need to be. Couldn't this be another one?

No.  Plenty of our customers want features to work across platforms.  They 
also want to be able to switch to a different code base when there is a 
security issue in the one they are running.  They also want things to work as 
well as possible for disaster recovery.  Having the GC information in a 
vendor-neutral form achieves these objectives.

There are plenty of customers where all the slaves are transferring data from 
two masters all the time.  Those masters are from different vendors, with 
transfers flowing from whichever is currently configured as the ultimate 
master between them.  They may also be configured to transfer from all of the 
slaves as well.  If the current master dies or is taken out of service, the 
backup master will get the newest copy of the zone that has made it to any of 
the other servers within minutes.  It can then be reconfigured as the active 
master and continue straight away.  Throwing proprietary GC into this does not 
work.

> I remain open to the idea that I am just missing the point because I don't 
> spend enough time in enterprise or campus networks. I think I'm possibly not 
> the only one in that boat, though, and I don't think it's unreasonable to 
> expect the draft to explore its applicability and explain clearly why 
> in-zone signalling (hence RRs) is necessary as a prerequisite to 
> standardisation. As I mentioned before, the bar for experimental is surely 
> much lower, and the bar for simple codepoint assignment lower still.

What are the terms of the experiment if you want experimental?  What are you 
wanting to discover?  That the protocol works?

> Joe

-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742              INTERNET: [email protected]

_______________________________________________
DNSOP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dnsop
