Re: Processing of Expired Internet-Drafts
Definitions:

  draft name     = filename stem, excluding version number (e.g., draft-iab-dos)
  version number = two-digit I-D version number
  filename       = full filename, composed of draft name, version number, and a
                   format suffix (e.g., draft-iab-dos-00.ps)

(I've seen the term "filename" in this context used variously to refer to (a) the full filename, (b) what I call the draft name, and (c) the combination of draft name and version number.)

Fred Baker wrote:
> If I can have two separate files (a tombstone and a subsequent new file
> version) that have the same name, as described in the recent
> announcement, I am going to have to figure out a trigger that will tell
> me that I need to re-download the file.

I, too, dislike the idea of these filenames being ambiguous. I'd prefer that the tombstone file is the only file that ever gets that combination of draft name and version number. Any subsequent version of the draft should get a new version number, higher than that used by the tombstone.

> according to the above procedure we are going to accumulate empty
> tombstone files at a rate of perhaps 1000-2000 per year ad infinitum.

They're not empty. I see no need for them to remain indefinitely -- six months seems fine to me -- though if we're going to maintain strong uniqueness of draft names then at least the names need to be recorded indefinitely.

-zefram
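The filename anatomy in the definitions above can be sketched as code. This is a hypothetical illustration, not an official tool; the set of format suffixes (txt/ps/pdf) is my assumption.

```python
import re

# Split a full I-D filename into (draft name, version number, suffix),
# per the definitions above.  The suffix alternatives here are an
# assumption for illustration.
FILENAME_RE = re.compile(
    r"^(?P<name>draft-[a-z0-9-]+)-(?P<version>\d{2})\.(?P<suffix>txt|ps|pdf)$"
)

def parse_draft_filename(filename):
    m = FILENAME_RE.match(filename)
    if m is None:
        raise ValueError("not a recognisable I-D filename: %r" % filename)
    return m.group("name"), m.group("version"), m.group("suffix")
```

Under the scheme I'd prefer, a tombstone and a later revision would never yield the same (name, version) pair from this function.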
Re: /48 micro allocations for v6 root servers, was: national security
Bill Manning wrote:
> /35 routes are being discouraged in favor of /32 entries...
> 4,064,000,000 addresses to ensure that just one host -might- have
> global reachability.  IMHO, a /48 is even overkill... :)

Just wondering, as I have about IPv4 anycast allocations: why can't we designate a block for micro-allocations, within which prefix length filters aren't applied? The number of routes in the DFZ is the same either way; is there any technical reason why /64 or /128 prefixes, or /32 in IPv4, can't be used?

I'm not a routing person, so apologies if this is somehow unspeakably dumb.

-zefram
Re: /48 micro allocations for v6 root servers, was: national security
[EMAIL PROTECTED] wrote:
> Imagine if somebody flubs and withdraws a /12 and announces a /12 worth
> of /28

That's why I suggested relaxing the filters only within a designated block. So (for IPv4) the /12 worth of /28s gets ignored, but the /32s in the micro-allocation block are accepted.

It always seemed odd to me that we allocate a /24 per anycast service, and worry about the address space wastage, when all the anycast services we can expect to find useful in IPv4 will comfortably fit into less than a single /24.

If there's a problem due to the need for 100% implementation of the relaxation of prefix length filters, we should allocate a micro-allocation block for IPv6 *now*, while the number of routers requiring reconfiguration is relatively small. I propose 0:1::/32, which is distinctive, causes no fragmentation, and is in a region of the address space already recognised as being for weird stuff.

-zefram
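The relaxed-filter rule suggested above can be sketched as follows. This is a hypothetical illustration: 0:1::/32 is the block proposed in the post, but the /48 ordinary filter limit is my assumption for the example, not anything specified here.

```python
import ipaddress

# Apply an ordinary prefix-length limit everywhere *except* inside a
# designated micro-allocation block, where any prefix length (down to
# /128) is accepted.
MICRO_BLOCK = ipaddress.ip_network("0:1::/32")   # proposed block
MAX_ORDINARY_PREFIXLEN = 48                      # assumed ordinary limit

def accept_route(prefix):
    net = ipaddress.ip_network(prefix)
    if net.version == 6 and net.subnet_of(MICRO_BLOCK):
        return True   # no length filter inside the micro-allocation block
    return net.prefixlen <= MAX_ORDINARY_PREFIXLEN
```

With this rule, a flubbed announcement of long prefixes outside the block is still dropped, while a /128 for an anycast service inside the block is accepted.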
arguments against NAT?
A new sysadmin has recently joined the company where I work (I am a software engineer and part-time sysadmin). As he's the only full-time sysadmin here, the network now falls under his purview. Today he showed me his plans for reorganisation of the network, and they involve introducing NAT on a big scale.

His main arguments in favour of NAT are security (which I debunked), address shortage (which we don't have), and administrative convenience (which he never explained enough for me to see). I've argued strongly against NAT, but he's one of those people who seem to be willing to accept arbitrary amounts of pain ("we don't need to use [protocols that put IP addresses in payload]", "timeouts aren't a problem"). I'm now pointing him at some relevant RFCs.

My question for the list: is there a web page or other document anywhere that comprehensively states the case against NAT?

-zefram
Re: arguments against NAT?
Spencer Dawkins wrote:
> Yeah, but this was the point.  Where is the community consensus
> document that says all this?

RFC 2993 is the closest thing I could find to what I want, and it's rather good (thanks Tony), so it's at the top of the reading list I've sent to the new sysadmin. I'll be interested to see his reaction to it.

Looks like disaster may be averted in my case: the responsible manager, in a rare fit of technical eptitude, has spotted that the new sysadmin is interfering in things that already work. Strangely, address allocation is just about the only thing on this network that *isn't* broken.

-zefram
Re: I-D ACTION:draft-josefsson-dns-url-09.txt
[EMAIL PROTECTED] wrote:
> Title     : Domain Name System Uniform Resource Identifiers
> Author(s) : S. Josefsson
> Filename  : draft-josefsson-dns-url-09.txt

0. On careful reflection, I agree with Paul Vixie's analysis that concludes that the dnsauthority part of the URI does not belong here. It should be removed from the syntax.

1. A related issue, which I raised last time this was discussed but which was never addressed: there's a general extension mechanism, but no reasonable use for it. This URI type should express solely a (class, name, type) tuple; the extension mechanism should be abandoned.

2. There is no reasonable default for the dnstypeval element. This draft specifies a default of type A, which will cause confusion; explicit specification of the type should be mandatory.

3. Multiple types, or multiple classes, may be specified, but only one takes effect. Allowing dns:host.example.org?TYPE=A;TYPE=TXT to be valid, and to mean the same thing as dns:host.example.org?TYPE=A, is misleading. It should only be permitted to specify one type and one class. (This issue was raised last time this draft was discussed, but has been fixed in the wrong way.)

4. Although allowing dnsname to be empty is not necessarily wrong, it is inconsistent with prior practice. It would be clearer, and more consistent, to require the root domain to be represented by an explicit ".". (Another issue patched in the wrong way.)

5. The scheme described to encode a "." within a DNS label is inconsistent with basic URI syntax. Section 2.3 of RFC 2396 says that unreserved characters can be escaped without changing the semantics of the URI. Since "." is unreserved, this means that "." and %2e in a URI must be equivalent: dns:foo.bar.example?type=TXT and dns:foo%2ebar.example?type=TXT must refer to the same RRset. One possible solution is to use a reserved character (perhaps ",") to separate DNS labels within the URI, but this is pretty ugly. A more feasible solution is to use another layer of escaping; RFC 1035 provides a perfectly good escaping scheme for domain names, already familiar to DNS administrators.

-zefram
--
Andrew Main (Zefram) [EMAIL PROTECTED]
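The equivalence problem in point 5 is easy to demonstrate with any generic URI percent-decoder. The sketch below just shows that the two example URIs decode identically, which is why %2e cannot carry a distinct "dot inside a label" meaning.

```python
from urllib.parse import unquote

# RFC 2396 section 2.3: escaping an unreserved character must not change
# the meaning of the URI.  "." is unreserved, so a generic URI processor
# treats these two dns: URIs as naming the same thing.
a = "dns:foo.bar.example?type=TXT"
b = "dns:foo%2ebar.example?type=TXT"
assert unquote(a) == unquote(b)   # both decode to dns:foo.bar.example?type=TXT
```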
Re: FYI: BOF on Internationalized Email Addresses (IEA)
Dave Crocker wrote:
> > This poses a fundamental barrier for users needing mail addresses to
> > be expressed in a richer set of characters,
> I have yet to see this need established.

Everyone who has supported internationalised mail addresses has axiomatically assumed such a need, and has conspicuously failed to provide any more detail, such as any of Keith Moore's suggestions. I think the first task in this area should be to investigate the nature and degree of desire for non-ASCII local parts. This desire needs to be weighed against the benefits we derive from writing all local parts in a small, fixed alphabet (ASCII printables).

-zefram
Re: FYI: BOF on Internationalized Email Addresses (IEA)
Mark Crispin wrote:
> In many regions where Latin diacriticals are used, there is no
> acceptable transform of a surname to a form that does not use
> diacriticals.  Simply omitting the diacritical causes (at least to the
> inhabitants of those regions) a misspelling.

Ah, this one's easy. Local parts aren't limited to Latin letters; they can use all the ASCII printables. Diacriticals are available there, albeit in characters that are shared with other uses. [EMAIL PROTECTED] is a perfectly valid email address already.

It doesn't start to get tricky until we get into the eastern European languages -- ASCII intentionally provides only western European diacriticals. Cue the debate about whether the diacritic should go before or after the base letter.

-zefram
Re: [Fwd: [Asrg] Verisign: All Your Misspelling Are Belong To Us]
Today VeriSign is adding a wildcard A record to the .com and .net zones. This is, as already noted, very dangerous. We in the IETF must work to put a stop to this attempt to turn the DNS into a directory service, and quickly. I suggest the following courses of action, to be taken in parallel and immediately:

0. Urgently publish an RFC ("Wildcards in GTLDs Considered Harmful", or "DNS Is Not A Directory") to provide a clear statement of the problem and to unambiguously prohibit the practice.

1. Via ICANN, instruct Verisign to remove the wildcard.

2. Some of us with sufficiently studly facilities should mirror the COM and NET zones, filtering out the wildcards. Then the root zone can be modified to point at these filtered COM and NET nameservers.

3. Instruct ICANN to seek another organisation to permanently take over COM and NET registry services, in the event that Verisign do not comply with instructions to remove the wildcard.

I believe that the direct action I suggest in point 2 is necessary, because we have previously seen the failure of the proper channels in this matter, when Verisign added a wildcard for non-ASCII domain names. Verisign have shown a disregard for the technical requirements of their job, as well as displaying gross technical incompetence (particularly in the wildcard SMTP server). I believe Verisign have forfeited any moral right to a grace period in which to rectify the situation.

-zefram
--
Andrew Main (Zefram) [EMAIL PROTECTED]
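The filtering in point 2 depends on being able to tell wildcard synthesis apart from genuine registrations. One way (a sketch, not anyone's deployed method) is to probe with a label that cannot plausibly exist; the resolver is injected so the logic needs no network access, and I assume the convention that it raises LookupError on NXDOMAIN.

```python
import random
import string

# Detect a zone-level wildcard by resolving an implausible random label.
# resolve(name) should return a list of addresses, or raise LookupError
# on NXDOMAIN (an assumed interface for this sketch).
def has_wildcard(zone, resolve):
    probe = "".join(random.choice(string.ascii_lowercase) for _ in range(24))
    try:
        addrs = resolve(probe + "." + zone)
    except LookupError:
        return False   # NXDOMAIN: no wildcard in effect
    return bool(addrs)
```

A mirroring server could use a check like this to decide which answers to pass through and which to convert back into NXDOMAIN.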
Re: [Fwd: [Asrg] Verisign: All Your Misspelling Are Belong To Us]
Dean Anderson wrote:
> Is it any worse than IE taking you to msn search when a domain doesn't
> resolve?  Or worse than Mozilla taking you to Netscape, duplicating a
> Google search, and opening a sidebar (and a netscape search) you didn't
> want?

Yes, it is worse. Much worse. There is a fundamental difference between this defaulting happening in the DNS and happening in a client program. It is necessary that the wire protocols distinguish between existence and non-existence of resources in a standard manner (NXDOMAIN in this case) in order to give the client the choice of how to handle non-existence. If IE wishes to default to doing a web search under those circumstances, that is silly but harms no one else. What Verisign has done pre-empts that choice for everyone.

-zefram
--
Andrew Main (Zefram) [EMAIL PROTECTED]
typo wildcard I-D
I've just submitted an Internet-Draft concerning the wildcard issue. For those who can't wait for it to appear in the internet-drafts directory, it's at http://www.fysh.org/~zefram/typo_wcard/.

-zefram
--
Andrew Main (Zefram) [EMAIL PROTECTED]
Re: You Might Be An Anti-Spam Kook If ...
Vernon Schryver wrote:
> I've been compiling a list in the style of Jeff Foxworthy.
> You Might Be An Anti-Spam Kook If

Please publish this as an RFC. A collection of unworkable approaches to the spam problem (anti-spam anti-patterns) is useful knowledge that should be preserved and promulgated to reduce the Anti-Spam Kook problem.

-zefram
--
Andrew Main (Zefram) [EMAIL PROTECTED]
Re: A modest proposal - allow the ID repository to hold xml
Rosen, Brian wrote:
> Allow the submission of an xml file meeting the requirements of RFC2629
> along with the text file (and optional ps file) for an Internet Draft.

The value in this would be that it provides everyone with the document source, suitable for generating patches for the author. This is useful, but if it's going to be allowed with XML then we should also allow it with nroff, which historically we haven't. I don't have particularly strong feelings either way, but I do think these two cases should be treated equivalently.

-zefram
Re: FW: Virus alert
Christian Huitema wrote:
> By the way, the worm does not only include its own SMTP service.  It
> seems to also include its own DNS code, probably in order to get the MX
> records of its targets.  This DNS agent is parameterized to start any
> look-up at the A-root, with the side effect of overloading this root
> server.

Does this mean we can stop the virus and associated spam just by switching off the A root?

-zefram
Re: the end-to-end name problem
S Woodside wrote:
> we must walk down to the 5th definition before we come to the one that
> is relevant. [2]

> 1. end -- (either extremity of something that has length; "the end of
> the pier"; "she knotted the end of the thread"; "they rode to the end
> of the line")

This definition looks relevant. More relevant than the fifth.

-zefram
Re: concerning draft-josefsson-dns-url-08.txt
Simon Josefsson wrote:
> The intention was to make sure future extensions aren't disallowed by
> the syntax.

Yes. I'm wondering what kind of extensions might be permitted. I'm concerned, as with several recent protocols, that perhaps an extension mechanism has been added simply because it's very normal for IETF protocols to have an extension mechanism, with no real thought about what class of semantic features might be expressed by extensions. It wouldn't be the first instance of cargo-cult protocol design. If the intent of the dns URI type is to express a (name, type, class) tuple, then presumably any extension would be making the URI express something other than such a tuple, in which case I think it would be better to use a new URI scheme rather than overload an existing one.

> Can you elaborate?  The intention is that, e.g., a binary label with
> the ASCII value 0x17 is expressed as dns:%17.  There are some text and
> examples for this.

No, that's a text label containing a single octet with value 0x17. I was referring to RFC 2673 bit labels. Consider also what should be done for other EDNS0 extended label types (RFC 2671).

-zefram
--
Andrew Main (Zefram) [EMAIL PROTECTED]
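The distinction above is easy to see with a generic percent-decoder: %17 in a URI just decodes to one octet of label text, and nothing in that mechanism can signal an RFC 2673 binary (bit) label.

```python
from urllib.parse import unquote_to_bytes

# What "%17" in a URI actually denotes after percent-decoding: a single
# octet with value 0x17, i.e. an ordinary one-octet *text* label -- not
# a bit label, which has no representation at this layer.
label = unquote_to_bytes("%17")
assert label == b"\x17" and len(label) == 1
```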
Re: concerning draft-josefsson-dns-url-08.txt
Simon Josefsson wrote:
> one added some text to clarify that it was actually intended to allow
> for zero length dnsname's (to denote the DNS root).

This is technically correct, according to RFC 1034, but will be confusing. "dns:" intuitively looks incomplete. It's more conventional to name the root domain in absolute form, as ".".

An interesting comparison: the dig DNS lookup tool from the BIND folks doesn't accept an empty string as a domain name; it insists on the root domain being specified as ".".

So I argue that requiring the root domain to be represented as "." in the context of the URI, forbidding a zero-length dnsname, will make for a clearer protocol, more likely to be implemented correctly. This isn't an absolute matter, though; both versions of the protocol are workable.

-zefram
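The rule I'm arguing for amounts to a one-line check. This is a toy validator of my own devising, not anything from the draft (which permits the empty form):

```python
# Forbid an empty dnsname; require the root domain to be written
# explicitly as ".".  Other names must not begin with a dot.
def valid_dnsname(name):
    if name == "":
        return False   # bare "dns:" looks incomplete; reject it
    if name == ".":
        return True    # explicit root domain
    return not name.startswith(".")
```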
Re: concerning draft-josefsson-dns-url-08.txt
Paul Vixie wrote:
> we're going to give the URL syntax a way to designate a server other
> than the one which is authoritatively responsible for the data, or a
> local caching resolver (also called a recursive nameserver), then we
> need to be able to set the RD bit.

What I understand you to be saying above is that when querying a non-authoritative nameserver we need to have RD=1 in the query, so that we'll get an answer even if the nameserver doesn't have the answer already cached. This is true, but it is a detail of the query mechanism; why does it need to be represented explicitly in the URI? Or, to put it another way, what about the current URI definition requires queries to be made with RD=0? I'm not sure I've correctly understood what you've said, so these questions might not make much sense.

My understanding of the hostport part of the syntax is that it allows one to specify which server's view of the domain name space one will look at. (Particularly with split DNS, this makes a difference.) Where it's not specified, presumably the default will be one's configured recursive resolver, which actually is not necessarily accessed by the DNS protocol over UDP as is implied where the hostport part is specified. (The draft says "Unless specified in the URI, the server (hostport) is assumed to be locally known.") You seem to see the default as being whatever nameserver is authoritative for the data addressed by the URI, which suggests that you have a radically different idea of the semantics of the hostport syntax. How do you see it working?

-zefram
--
Andrew Main (Zefram) [EMAIL PROTECTED]
Re: concerning draft-josefsson-dns-url-08.txt
Paul Vixie wrote:
> PQ: the url scheme does not allow the RD bit to be set.

The draft says "A DNS URI designates a DNS resource record set ... by domain name, type, class and server." That is, it identifies a location within the abstract domain name space, a location which may contain an RRset. The setting of the RD bit doesn't affect which RRset is designated; all it can affect is whether a query succeeds in returning the requested RRset. Nothing in the DNS URI so far refers to a specific query mechanism, and indeed I can see reasonable uses for the URI form that don't involve making any queries, via the DNS protocol or otherwise. In summary, adding control of the RD bit would turn the DNS URI into something else, something that I don't think it should be.

I see a couple of editorial nits, quite separate from the above:

0. The ABNF for dnsqueryelement uses "|" instead of "/" characters to separate branches.

1. The grammar allows dnsname to be zero-length.

-zefram
--
Andrew Main (Zefram) [EMAIL PROTECTED]
Re: Impending Publication: draft-iab-service-id-considerations-01.txt
|If intermediate systems take actions on behalf of one or more parties
|to the communication or affecting the communication, a good rule of
|thumb is they should only take actions that are beneficial to or
|approved by one or more of the parties, within the operational
|parameters of the service-specific protocol, or otherwise unlikely to
|lead to widespread evasion by the user community.

I think this statement gives dangerously wide latitude for intermediate systems to damage end-to-end-ness. It seems to me that a router should only do something outside fundamental routing behaviour when this has been explicitly approved, either through protocol negotiation or through manual configuration, by sufficiently many affected parties that the others can't tell that anything out of the ordinary is happening.

To perceive some action as "beneficial to ... one or more of the parties" does not make it so. Not only is human history in general littered with examples of evil done "for their own good", but also within recent networking history some of the problems that the draft is responding to have been caused by intermediate systems trying to be helpful. A major problem is that most of these attempts to be helpful are attempts to be helpful to humans, which end up being unhelpful to computers and to those humans that interact with computers as we do; few people outside the IETF are truly competent to judge what is beneficial to someone else within the context of computer networking. Do not underestimate the degree to which computer systems are designed by managers.

In the next clause, it's not clear whether "within the operational parameters of the service-specific protocol" is intended to be ANDed or ORed with the "beneficial ... or approved ..." clause. If it is intended to be ORed, I find this also to be dangerously broad. Finally, "unlikely to lead to widespread evasion" is another criterion that anyone who needs to be told this rule won't be competent to judge.

Overall, I think that, particularly in such an official statement as this, the IAB needs to be very conservative about Internet architectural matters. We should maintain the current situation wherein firewalls are recognisably a breach of the Internet architecture, at least until we've worked out a way to do them that doesn't cause surprising behaviour. This Internet works (ish), and we need to keep it that way: if we break it now, we'll probably never be able to get the popular momentum required to replace it with a new one that works.

-zefram
Re: Certificate / CPS issues
Dan Kohn wrote:
> Regarding a passport mechanism, have you taken a look at
> www.habeas.com?  Specifically, they offer such a "this is not spam"
> warrant mark, and the pricing for individuals is free.  The trick is
> that they use copyright and trademark law as the enforcement mechanism.

I'm surprised that Habeas has caught on even to the extent it has, as I see a fatal flaw in this use of copyright to get legal control. It is reminiscent of a legal case I recall in which a games console refused to execute any game unless a certain copyright notice (ascribing copyright to the console manufacturer) appeared in a known location in the game ROM. Third-party game manufacturers put the notice in their games, and were hauled into court. The court held that the copyright notice in question was a functional part of the interface between the game and the console, and that this overrode its normal semantic of signifying copyright ownership. The notice that the console looked for therefore didn't mean anything legally, and the notice elsewhere in the game (the real notice, ascribing ownership to the game manufacturer) was the legally significant one.

What Habeas does is akin to publishing a protocol or file format in a copyrighted document, and then suing implementors of the protocol for copying sections of protocol data from the document. It's mixing expression with content (copyright protects one and not the other). We wouldn't stand for it if Microsoft did that, and I don't think the courts would either. In fact, Microsoft has tried things very similar to this, and we didn't stand for it. I'm astonished that members of this overwhelmingly pro-freedom and pro-reverse-engineering subculture have themselves tried to deploy exactly the same odious trick. Although I approve of Habeas' aims, for the sake of our freedoms I hope that their technique is held to be ineffective.

FWIW, my spam filter picks up Habeas headers as a strong ham indicator, appearing in 0.4% of ham in my corpus and not at all in spam. I'm waiting to see which of these numbers is going to increase more significantly.

-zefram
--
Andrew Main (Zefram) [EMAIL PROTECTED]
Re: spam
Russ Allbery wrote:
> How many people who are active and have been active for some time on
> the Internet are still putting e-mail addresses on web pages and then
> reading them with no spam filtering whatsoever, just looking through
> the inbox and deleting what isn't wanted?

I do, and I don't think I've ever had a false positive. I get 5 to 10 spam messages per day, though this is increasing.

Professionally, I'm currently developing a Bayesian spam filter. My intention is to eventually apply such a system to my private mail. The results I've had so far are enough to almost entirely solve my spam problem for the next couple of years: it currently stops 98% of spam in tests, and gets false positives only on messages that look a lot like spam to me. I also know I can still improve the filter a long way, a lot faster than the spammers can learn to dodge such filters.

With Bayesian filters being this effective, I'm not worried about spam in the immediate future. Proper application of filtering keeps us inherently one step ahead of the spammers in the arms race -- they have to do something *completely* unlike what they've done before in order to fool us. For this reason, I'm opposed to those anti-spam measures that reduce the email network's tolerance for unsolicited or anonymous messages. I've seen rather too many proposals that would destroy the openness of the email system in order to make spam more difficult -- and most of them would disproportionately hurt legitimate email, putting up barriers that spammers would easily bypass.

Whether the receiving-end filtering approach is viable in the long term is debatable. When 99.9% of email is spam, the filtering world will look very different. Perhaps at that point it'll be infeasible to continue allowing unsolicited email at all.

-zefram
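For readers unfamiliar with the approach, here is a bare-bones naive Bayes text classifier in the spirit described above. This is a generic sketch of the technique, not the filter mentioned in the post; the Laplace smoothing and zero threshold are illustrative choices.

```python
import math
from collections import Counter

class BayesFilter:
    """Minimal naive Bayes spam classifier over message tokens."""

    def __init__(self):
        self.spam = Counter()
        self.ham = Counter()
        self.nspam = self.nham = 0

    def train(self, tokens, is_spam):
        (self.spam if is_spam else self.ham).update(tokens)
        if is_spam:
            self.nspam += 1
        else:
            self.nham += 1

    def is_spam(self, tokens, threshold=0.0):
        # Sum log-likelihood ratios of Laplace-smoothed per-token
        # probabilities; positive total means "more spam-like".
        score = 0.0
        for t in set(tokens):
            ps = (self.spam[t] + 1) / (self.nspam + 2)
            ph = (self.ham[t] + 1) / (self.nham + 2)
            score += math.log(ps / ph)
        return score > threshold
```

Real filters differ in tokenisation, smoothing, and how scores are combined, but this is the core of the "arms race" argument: the classifier adapts to whatever token distribution the spammers produce.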
Re: Searching for depressing moments of Internet history.....
[EMAIL PROTECTED] wrote:
> Why the creation of the first porn site is a depressing moment I'm not
> sure.

A few years ago I found myself wondering why most commercial websites are so garish, flashy, fragmented, and awkward to navigate. I came to the conclusion that they'd all taken their stylistic lead from the very first commercial websites: the porn sites. Is it not depressing that porn-pushing techniques have been adopted into, and come to dominate, mainstream commerce?

(Happy lynx user here; it's not often that I'm forced to see how visually appalling these sites have been designed to be.)

-zefram
Re: Last Call: Extensions to FTP to Proposed Standard
> The IESG has received a request from the Extensions to FTP Working
> Group to consider Extensions to FTP draft-ietf-ftpext-mlst-13.txt as a
> Proposed Standard.

There are a couple of unclarities in the specification that should be cleaned up.

Section 2.4 states that, unless stated to the contrary, "any reply to any FTP command ... may be of the multi-line format". It doesn't say what counts as stating to the contrary, except to say that ABNF indicating a single-line response format is not sufficient to require a single-line response. This makes those parts of the specification that provide a more restricted syntax for certain replies ambiguous: it isn't clear where multi-line responses are permitted. This happens in section 3.1 (defining a successful MDTM reply) and 4.1 (SIZE): both give ABNF that on its face allows only for a single-line reply, and in neither case is the applicability of 2.4's implicit multi-line reply rule explicitly addressed. Neither section says what a valid multi-line success reply (to MDTM or SIZE) would look like. I guess that multi-line replies aren't intended to be permitted in these cases, but in the light of section 2.4 it's unclear. Shouldn't we instead have an explicit note in cases where an ABNF rule is intended to be taken *non*-literally?

Section 3.1, on the MDTM command, says "Attempts to query the modification time of files that are unable to be retrieved generate undefined responses." The phrase "undefined responses" sounds like it permits what would otherwise be violations of the protocol, such that the client cannot correctly infer any information from what happens later in the session, and in fact cannot determine whether this has happened at all. Sections 3.1 and 3.2 go on to provide perfectly sensible ways of handling error conditions within the protocol, so undefined responses seem unnecessary. The "undefined responses" sentence should probably be removed, or modified to something like "Attempts ... may generate either errors or successful responses that might not be meaningful." If "undefined responses" was intended to permit some other historical behaviour, the limits of such behaviour should be specified.

-zefram
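To illustrate the single-line reading of the MDTM ABNF, here is a sketch of a client-side parser that accepts only the single-line success form ("213 " followed by a YYYYMMDDHHMMSS timestamp, optionally with fractional seconds). This deliberately sidesteps the multi-line ambiguity discussed above by rejecting anything else; it is an illustration, not a complete FTP client.

```python
import datetime
import re

# Single-line MDTM success reply: "213 " + time-val (14 digits, optional
# fractional part), per the face-value reading of the ABNF.
MDTM_RE = re.compile(r"^213 (\d{14})(?:\.\d+)?\r?\n?$")

def parse_mdtm_reply(reply):
    m = MDTM_RE.match(reply)
    if m is None:
        raise ValueError("not a single-line MDTM success reply")
    return datetime.datetime.strptime(m.group(1), "%Y%m%d%H%M%S")
```

A server exploiting section 2.4's multi-line allowance would break a parser like this, which is exactly why the specification needs to say explicitly whether that is permitted.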
persistent domain names
Last week I published an I-D, draft-main-sane-tld-00, which argued that there is a need in various Internet protocols for easily-available persistent[1] names within the DNS domain name space. I asked in dnsop, which seemed the most relevant WG, where would be the appropriate place to discuss it. The only consensus reached seems to be that dnsop is not the right place, so I'm throwing this open to a wider audience.

The debate in dnsop actually centred on whether this is an IETF matter at all, rather than on which WG it is relevant to. Some said that it's not a technical problem -- it's true that it's not a technical problem in the DNS protocol, but it is a problem for other protocols. There were varying opinions on the extent to which ICANN is, or should be, involved.

So I'd like opinions from the wider IETF membership: is the problem my I-D describes something that the IETF should be concerned about? Is it something that can be usefully discussed within the IETF? And if so, what is the proper forum?

I'm looking for discussion of the problem more than the solution at this stage; my I-D does outline a couple of possible solutions, but considering the issues that have already arisen in respect of the problem statement, solution-finding will have to wait a bit.

-zefram

[1] Persistent in the way that MIME types such as image/vnd.xiff are persistent.