Tony Finch wrote:
On Mon, 10 Nov 2008, Keith Moore wrote:
Tony Finch wrote:
In any case, DNSBLs should scale roughly according to the size of the
routing table, not the size of the address space.
What does it mean for a DNSBL to scale?
I was referring to the number of entries that have
Chris Lewis wrote:
So, where's this accountability gap you keep talking about?
The gap is where ISPs can drop my mail without a good reason, and
without my having any recourse against them. The gap increases when
they delegate those decisions to a third party. It increases further
when the
Trying to sum up the situation here:
1. Several people have argued (somewhat convincingly) that:
- Source IP address reputation services can be valuable tools for
blocking spam,
- Such services can be better at identifying spam than per-message
content analysis,
- Such services can (and
Matthias Leisi wrote:
[Disclosure: I am the leader of the dnswl.org project; I provided some
input into the DNSxL draft as far as it concerns whitelists.]
Keith Moore schrieb:
These incidents happen one at a time. It's rarely worth suing over a
single dropped message, and yet
Ned Freed wrote:
Sadly, I have to agree with Keith. While these lists are a
fact of life today, and I would favor an informational document,
or a document that simply describes how they work and the issues
they raise, standardizing them and formally recommending their
use is not desirable at
John Levine wrote:
standardizing them and formally recommending their use
I'm not aware of any language in the current draft that recommends
that people use DNSBLs.
Standardizing it is an implicit recommendation. In particular it's a
statement that there are no known technical omissions
Dave,
you're mischaracterizing the situation and you ought to know better.
basing reputation on IP address is pretty dubious.
transmitting reputation over DNS is not otherwise-acceptable
and there's at least some argument to be made that this choice of
mechanism lends itself to abuse, or even
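For concreteness, the DNS mechanism in question encodes the client IP address into a DNS name: the octets are reversed and appended under the list's zone, and an answer (conventionally an A record in 127.0.0.0/8) means "listed". A minimal sketch in Python; the zone name is made up:

```python
def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the lookup name for an IPv4 address in a DNS-based list:
    reverse the dotted-quad octets and append the list's zone."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

# NXDOMAIN for this name means "not listed"; an A record
# (typically 127.0.0.x) means the address appears on the list.
print(dnsbl_query_name("192.0.2.1", "dnsbl.example.org"))
# -> 1.2.0.192.dnsbl.example.org
```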
John C Klensin wrote:
I am beginning to wish for the days in which, at least in
principle, we could standardize something and immediately put a
"not recommended" label on it.
Well, I have often wished we had a clear label (or maybe even a separate
document series) for things of the form if
--and,
more important, off of-- those lists.
john
--On Friday, 07 November, 2008 18:38 -0500 Keith Moore
[EMAIL PROTECTED] wrote:
DNSBLs work to degrade the interoperability of email, to make
its delivery less reliable and systems less accountable for
failures. They do NOT meet
John Levine wrote:
Unlike you, I don't see overwhelming community consensus for
this mechanism.
Aw, come on. There's a billion and a half mailboxes using the
Spamhaus DNSBLs, on systems ranging from giant ISPs down to hobbyist
Linux boxes.
and there's a billion and a half users whose
You keep citing 1.5 billion users. I haven't asked all of them, of
course, but I keep finding that users don't trust their email to be
reliable, and they don't know how to find an email service that is
reliable. Mostly they maintain several different email accounts and try
sending from multiple
Chris Lewis wrote:
Keith Moore wrote:
I think you're missing the point.
Oh, no, I fully understand the point. In contrast, I think you're
relying on false dichotomies.
For example:
Better interoperation of a facility that degrades the reliability of
email, by encouraging an increased
Livingood, Jason wrote:
Keith - I encourage you to consult with several very large scale email
domains around the world to see if they think that DNSxBLs are useful,
effective, and in widespread use or not.
Jason - I encourage you to consult with users whose mail isn't getting
delivered,
Under no circumstances should any kind of Blacklists or Whitelists be
accepted by IETF as standards-track documents. These lists are
responsible for huge numbers of illegitimate delivery failures. We
should no more be standardizing such lists than we should be standardizing spam.
Keith
DNSBLs work to degrade the interoperability of email, to make its
delivery less reliable and systems less accountable for failures. They
do NOT meet the "no known technical omissions" criterion required of
standards-track documents.
The fact that they are widely used is sad, not a justification for
First question is: Why have any delay at all? I presume the answer has
to do with broader community exposure -- review and maybe experience,
although 4 months is not enough for serious experience.
If indeed that reason is right, it seems that distribution doesn't
happen until the RFC is
+1
Dave CROCKER wrote:
IESG Secretary wrote:
This statement provides IESG guidance on interim IETF Working Group
meetings, conference calls, and jabber sessions.
Good note.
Historically, the work of the IETF has been conducted over mailing lists.
This practice ensures the widest
Julian Reschke wrote:
Paul Hoffman wrote:
...
It sure is. It just turns out to be a terrible format for extracting
text as anything other than lines, and even then it doesn't work
reliably with commonly-used tools
...
It's also a terrible format for reading documentation in a Web Browser.
Julian Reschke wrote:
data URIs are available in 3 out of 4 major browsers, with IE8 adding
them as well.
I thought IE8 had some fairly annoying limitations on their use?
format. But data: URLs are not as widely supported as we'd like. Nor
is MHTML. Having multiple files per document
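For reference, a data: URL packs the media type and a base64 payload into the URI itself, which is what makes single-file documents possible where the scheme is supported. A quick sketch using only the standard library:

```python
import base64

def make_data_url(content: bytes, media_type: str) -> str:
    """Encode content as an RFC 2397 data: URL with a base64 payload."""
    b64 = base64.b64encode(content).decode("ascii")
    return f"data:{media_type};base64,{b64}"

print(make_data_url(b"hello", "text/plain"))
# -> data:text/plain;base64,aGVsbG8=
```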
Tim Bray wrote:
The TAG is in fact clearly correct when they state that introduction
of new URI schemes is quite expensive.
To me it seems that this depends on the extent to which those new URI
schemes are to be used in contexts where existing URI schemes are used.
New URI schemes used in
Tim Bray wrote:
On Thu, Aug 7, 2008 at 10:23 AM, Keith Moore
[EMAIL PROTECTED] wrote:
The TAG is in fact clearly correct when they state that
introduction of new URI schemes is quite expensive.
To me it seems that this depends on the extent to which those new
URI schemes are to be used
Tim Bray wrote:
Don't browser and OS libraries dispatch on URI scheme? I guess
it's probably not as easy to extend as adding a new handler for a
new Content-Type, but it's at least possible for a new URI scheme
to start appearing (in email, Web pages, local docs, etc) and for
the user to
Tim Bray wrote:
On Thu, Aug 7, 2008 at 4:47 PM, Keith Moore [EMAIL PROTECTED] wrote:
That's ridiculous.
First of all it's not as if HTTP is an optimal or even particularly
efficient way of accessing all kinds of resources - so you want to
permit URI schemes for as many protocols as can use
Noel Chiappa wrote:
From: Keith Moore [EMAIL PROTECTED]
these limitations don't inherently apply to NAT between v4 and v6,
I thought the inherent problems ... would apply to an IPv4-IPv6 NAT,
when such a device is used to allow a group of hosts with only IPv6
Noel Chiappa wrote:
From: Keith Moore [EMAIL PROTECTED]
You don't need to add it to all (or most) IPv6 implementations.
More hair-splitting...
It's not hair-splitting if it significantly reduces the barrier to adoption.
The goal ... is not to magically make all boxes
I find myself imagining what IETF would be like if anyone who could
claim to be a journalist (say because they have a blog) could get in for
free, subject only to the condition that they could not talk during
meeting sessions.
I think I just might claim to be a journalist (after all, I do
Noel Chiappa wrote:
IPv4 NATs cause problems .. because they rob applications developers of
functionality, make the net less reliable and less flexible, increase
the cost of running applications and raise the barrier for new
applications, and increase the effort and expense
Noel Chiappa wrote:
From: Keith Moore [EMAIL PROTECTED]
But these limitations don't inherently apply to NAT between v4 and v6,
particularly not when the v4 address is a public one.
I don't understand this; I thought the inherent problems you so ably and
clearly laid out in your
Ole Jacobsen wrote:
I agree with Paul.
Having now quickly read the article in question I don't even see what
the problem is, including the somewhat provocative headline.
I think it's a huge problem if the market gets the idea that NATs (as we
currently know them) will also be a part of
Fred Baker wrote:
On Jul 31, 2008, at 5:52 PM, JORDI PALET MARTINEZ wrote:
Some considered that part of the delay of the IPv6 deployment was due
to the lack of communication effort from IETF. I'm not really sure
about that, however I agree that everything helps, of course.
To be honest, I
Noel Chiappa wrote:
Having read it, I think this story is pretty accurate - and that's probably
why some people here are upset about it.
It's not the accurate parts I'm upset about. From the article:
The Internet engineering community working on IPv6 is considering
reintroducing NAT - one
Geoff Huston wrote:
Yes, I stand by what I said in that article. If you disagree with
my perspective on this topic, then perhaps you may want to followup
with me directly, rather than claim some shortcoming on the part
of the journalist.
Well, of course, the things you are quoted as saying
Lixia Zhang wrote:
I'd like to share my own experience here: I was interviewed by exactly
the same reporter some time ago, and I requested to review the article
before publication. But the very next thing I learned was that the
article had been published, and the interview misquoted.
My
Ned Freed wrote:
Lixia Zhang wrote:
I'd like to share my own experience here: I was interviewed by
exactly the same reporter some time ago, and I requested to
review the article before publication. But the very next thing I
learned was that the article had been published, and the
interview
My experience with that reporter is similar. I came to believe that she
saw it as her job to misrepresent whatever information was given to her.
...
Let's not be too harsh. Do we have any reason to believe that media
coverage in this case is less accurate than media coverage in general?
The
one thing to measure is how many WG or BOF chairs say "Please don't give
us a Friday afternoon session."
Brian E Carpenter wrote:
The proposed Friday schedule would be:
0900-1130 Morning Session I
1130-1300 Break
1300-1400 Afternoon Session I
1415-1515 Afternoon Session II
Try
Do we spend too much time with overviews of drafts that really should
have been read by all attendees beforehand? Maybe it would be good for
the first session on Monday to be an Area Overview session where an
overview of all the latest drafts can be presented to give people a
broader view of
Eliot Lear wrote:
Bob,
This contradicts Section 2.1 of RFC 1123, which says an application
SHOULD support literal addresses (and of course DNS support is a
MUST -- Section 6.1.1).
Within the application space, which is what we were talking about with
RFC 1123, I'd have to say that
John C Klensin wrote:
(iii) The IETF has indicated enough times that domain
names, not literal addresses, should be used in both
protocols and documents that doing anything else should
reasonably require clear and strong justification.
I take issue with that as
Ted Faber wrote:
On Tue, Jul 08, 2008 at 12:54:16PM +1000, Mark Andrews wrote:
hk. is not a syntactically valid hostname (RFC 952).
hk. is not a syntactically valid mail domain.
Periods at the end are not legal.
RFC 1035 has *nothing* to do with defining what
Ted Faber wrote:
On Mon, Jul 07, 2008 at 11:28:05PM -0400, Keith Moore wrote:
The site-dependent interpretation of the name is determined not by the
presence of dot within the name but its absence from the end.
not so. in many contexts the trailing dot is not valid syntax.
Let me
RFC 1035 defines the dot. The fact that some apps don't recognize it is a
bug.
not when the application explicitly specifies that FQDNs are to be used.
in such cases the dot is superfluous.
Keith
1) I do understand where the current "last 64 bits are EUI-64" comes from.
2) Someone (I think it was Keith Moore) said that if the scheme doesn't
work for servers AND hosts (i.e no difference) it's a bad scheme. I sort
of agree with that, but the reason it doesn't work for servers is simply
lack
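As background to that exchange, the "last 64 bits" convention forms the interface ID from a 48-bit MAC via modified EUI-64. A hedged sketch of that transformation (the MAC is illustrative):

```python
def eui64_interface_id(mac: str) -> str:
    """Form a modified EUI-64 interface ID from a 48-bit MAC address:
    flip the universal/local bit of the first byte and insert ff:fe
    in the middle to stretch 48 bits to 64."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02                       # flip the U/L bit
    b = b[:3] + [0xFF, 0xFE] + b[3:]   # 48 -> 64 bits
    # render as four 16-bit groups, IPv6-style (no leading zeros)
    return ":".join(f"{b[i] << 8 | b[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:1a:92:ff:99:2a"))
# -> 21a:92ff:feff:992a
```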
Joe Touch wrote:
Keith Moore wrote:
| RFC 1035 defines the dot. The fact that some apps don't recognize it is a
| bug.
|
| not when the application explicitly specifies that FQDNs are to be used.
| in such cases the dot is superfluous.
Superfluous is fine. Prohibited is not. If the app inputs
Joe Touch wrote:
I don't think you get to revise a couple of decades of protocol design
and implementation by declaring that RFC 1043's authors and process
trump everything that's been done afterward.
I'll repeat:
some app misbehaviors are just bugs
not all app misbehaviors
Many, many working groups have looked at the problems associated with
relative names and determined that they're not acceptable. It's not a
bug that relative names are forbidden in these apps, nor that the
final . is implicit and in many cases disallowed. These are
carefully considered design
It's nonsensical for an application to decide that relative names are
unacceptable, but to require users to input names as relative.
it's nonsensical for you to unilaterally declare that such names are
relative, when well over two decades of practice indicates otherwise.
I didn't declare it;
Ted Faber wrote:
On Tue, Jul 08, 2008 at 02:17:57PM -0400, Keith Moore wrote:
The notion of a single-label fully-qualified DNS name being valid is
an odd one. DNS, as far as I can tell, was always intended to be
federated, both in assignment and lookup. The notion of having terminal
I don't think 1034 was handed down from a mountain on stone tablets.
It was not. But when other programs started using the DNS, it was *they*
that endorsed the DNS as per that doc.
...but they also profiled it in various ways for use with that specific
app. Some apps define their own
Ted Faber wrote:
On Tue, Jul 08, 2008 at 05:11:35PM -0400, Keith Moore wrote:
And vanity TLDs are going to be much more attractive if people think
they can get single-label host names out of them.
Of your concerns (which I don't have the relevant experience to comment
on in detail
The resolver (some resolvers, anyway) handles a name differently if it contains a dot.
The distinction that I have been unclearly stating is between absolute
and relative names. RFC 1034 (i said 1035 earlier, but it's 1034) lays
out a convention for specifying which one you want by appending the dot.
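The convention fits in a few lines; this is a hedged sketch of resolver-style name qualification, not any particular implementation:

```python
def qualify(name: str, search_suffix: str) -> str:
    """RFC 1034 convention: a trailing dot marks a name as absolute;
    a name without one is relative and gets the search suffix appended."""
    if name.endswith("."):
        return name                            # absolute: use as-is
    return name + "." + search_suffix + "."    # relative: qualify it

print(qualify("hk.", "example.com"))   # stays "hk."
print(qualify("www", "example.com"))   # becomes "www.example.com."
```

Many real resolvers additionally try a name as absolute first whenever it contains any interior dot (the "ndots" heuristic), which is the dot-dependent behavior described above.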
According to Glen via RT (RT is a well-known bug-ticket system):
This check is in place at the direction of the IETF community, and has
been discussed and debated at length.
I don't recall the Last Call on that question, nor even the I-D.
seems like this calls into question the
Historically, the view has been that bloating the root servers in that
way would be very dangerous.
Counter-claims observe that .com seems to have survived with a similar
storage and traffic pattern, so why should there be a problem moving the
pattern up one level?
because it makes the root
The latest CAIDA study says:
* The overall query traffic experienced by the roots continues to
grow. The observed 2007 query rate and client rate was 1.5-3X above
their observed values in 2006
* The proportion of invalid traffic, i.e., DNS pollution, hitting the
roots is still high, over
Umm, hk. resolves to the same address from both our machines and is
pingable (modulo a single packet loss from yours, depending on how your
ping counts) from both. http://hk pulls up a web page on a machine with
the same address.
interesting. I had actually tried that URL before sending the
The site-dependent interpretation of the name is determined not by the
presence of dot within the name but its absence from the end.
not so. in many contexts the trailing dot is not valid syntax.
I don't buy "unreliable" as a diagnosis for that state of affairs. hk
operates exactly as any
[EMAIL PROTECTED] wrote:
I think I could have been clearer with my message. It wasn't intended as
either a criticism of the ietf list management (in fact, I use precisely the
same anti-spam technique) or a request for help with configuration of my
mailservers (I may not be the sharpest knife
surely we in the IETF should be able to do better than to have our mail
servers filter incoming mail based on completely irrelevant criteria
like whether a PTR lookup succeeds!
how can we expect the rest of the network to be sane if we can't even
use reasonable criteria for our spam filtering
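For context, the check being objected to is typically a single MTA configuration knob. In Postfix, for instance, it looks something like this (illustrative; this is exactly the kind of criterion the message above calls irrelevant):

```
# Reject clients whose IP address has no PTR record at all
smtpd_client_restrictions =
    reject_unknown_reverse_client_hostname
```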
Jeroen Massar wrote:
On Wed, Jul 02, 2008 at 10:47:53PM -0700, 'kent' wrote:
[..]
However, this last address, 2001:470:1:76:2c0:9fff:fe3e:4009, is not
explicitly configured on the sending server; instead, it is being
implicitly
configured through ip6 autoconf stuff:
Which (autoconfig) you
John Levine wrote:
surely we in the IETF should be able to do better than to have our mail
servers filter incoming mail based on completely irrelevant criteria
like whether a PTR lookup succeeds!
Spam filtering is sort of like chemotherapy: the difference between
the good
John Levine wrote:
that's hardly a justification for stupidity.
I entirely agree. Where we evidently don't agree is about what's stupid.
in this case, what's stupid is filtering mail based on arbitrary and
largely undocumented criteria, with little regard for the
consequences. For
Dave,
regardless of the original intent of 2119, your belief is inconsistent
with longstanding IETF process. you do not get to retroactively change
the intent of RFCs that have gained consensus and approval.
Keith
Dave Crocker wrote:
Randy Presuhn wrote:
In what universe does that
Computer science long ago made the mistake of imposing semantic
difference on case differences, which is distinct from other uses of
case. Absent explicit specification of case sensitivity for the
keywords, the rules of English writing apply, I would think.
For better or worse, in IETF we
Dave Crocker wrote:
Yeah, this running code thing is over-rated.
indeed it is. many people are so accustomed to accepting the problems
with today's large-scale email operation that they fail to see how
things could be any other way. after all, it works...sort of.
It does have one
Dave Crocker wrote:
Tony Hansen wrote:
From this viewpoint, running code wins.
I'm also swayed by the principle of least surprise.
...
Last of all, I'm swayed by the discussions around RFC 974 and the DRUMS
archive search
...
So the bottom line is that I see sufficient support for
agree with most of what you said, however:
Since bad guys can deduce addresses by scanning --and will certainly do so
if we make it sufficiently hard for them to use the DNS-- this type of
DNS change, it seems to me, would have little effect on the antisocial.
note that scanning is a
Tony Finch wrote:
On Sat, 29 Mar 2008, John Levine wrote:
Getting rid of the fallback flips the default to be in line with
reality -- most hosts don't want to receive mail directly, so if
you're one of the minority that actually does, you affirmatively
publish an MX to say so.
I
So, a domain name erroneously appears in an address field and the referenced
host erroneously accepts mail it shouldn't.
This degree of problematic operation is not likely to get solved with a new
DNS construct.
this specific problem will get solved with a simplification of how email
Frank Ellermann wrote:
John C Klensin wrote:
I don't believe that I have seen any evidence of anyone who
had a strong position changing his or her mind since
the discussion started, nor have I seen a new argument
presented after the first few days.
The argument from somebody saying that his
operator action. It's too late to change the A behavior, but there
doesn't seem to be a reason to perpetuate this violation of the
principle of least surprise.
Henning
On Mar 29, 2008, at 10:34 AM, Theodore Tso wrote:
On Sat, Mar 29, 2008 at 10:16:10AM -0400, Keith Moore wrote:
I think
David Morris wrote:
On Fri, 28 Mar 2008, Keith Moore wrote:
If there were some serious technical consequence for lack of the MX record
I would be
all for specifying its use. Operational practice with A records shows that
there is no real issue,
only if you ignore the problems
and the dummy SMTP server works, but it consumes resources on the
host and eats bandwidth on the network. having a way to say "don't
send this host any mail" in DNS seems like a useful thing. and we
simply don't need the fallback to AAAA because we don't have the
backward
[EMAIL PROTECTED] wrote:
Let me throw another idea into the mix. What if we were to
recommend a transition architecture in which an MTA host
ran two instances of the MTA software, one binding only to
IPv4 addresses, and the other binding to only IPv6 addresses.
Assume that there will be some
Tony Hain wrote:
Your arguments make no sense. Any service that has an MX creates
absolutely no cost, and the fallback to AAAA only makes one last
attempt to deliver the mail before giving up. Trying to force the
recipient MTA to publish an MX to avoid delivery failure on the
sending MTA is
That is all well and good, but it is completely of value to the receiving
MTA, and under their complete control. There is nothing that requires a
receiving MTA to follow this model, despite what others may see as value.
well, if you want to receive mail from other domains without special
should do in the
absence of any such knowledge.
Ned Freed wrote:
--On Tuesday, 25 March, 2008 23:18 -0400 Keith Moore
[EMAIL PROTECTED] wrote:
You know, that's a very interesting point. One of the more common
configuration variations we see is to disable MX lookups and
just use address records
Markku Savela wrote:
The goal should be to make IPv4 and IPv6 easily replaceable anywhere,
without any reduced functionality.
I don't see how requiring MX lookups for IPv6 mail relaying reduces
functionality. As far as I can tell, it increases functionality because
it provides (as a
SM wrote:
We could look at the question by asking whether the fallback MX
behavior should be an operational decision. But then we would be
treating IPv4 and IPv6 differently.
IPv4 and IPv6 really are different, in a number of ways that affect both
applications and operations. e.g.
Frank Ellermann wrote:
Bill Manning wrote:
example.com. soa (
stuff
)
ns foo.
ns bar.
;
mailhost AAAA fe80::21a:92ff:fe99:2ab1
is what i am using today.
In that case adding an MX record pointing to mailhost
or not is perfectly irrelevant from an IPv4-only POV:
Frank Ellermann wrote:
Keith Moore wrote:
IPv4-only hosts can see the record even if they can't
directly send mail to that address. and there's no reason
(obvious or otherwise) why a MTA should reject mail from
a host just because that MTA can't directly route to it
What I
John Levine wrote:
Not to be cynical or anything, but regardless of what we decree, I
think it's vanishingly unlikely that many systems on the public
Internet* will accept mail from a domain with only an AAAA record.
I think it's vanishingly unlikely that email will be useful at all, if
spam
John Levine wrote:
Not to reignite the usual spam argument, but it is (unfortunately in
this case) not 1988 or even 1998 any more. When upwards of 90% of all
mail is spam, keeping mail usable is at least as dependent on limiting
the spam that shows up in people's mailboxes as delivering
This document is not the place to fight spam. If you want a BCP to deprecate
the fall-back, then write one. Until all the implementations remove
fall-back to A, the correct behavior is to also fall-back to AAAA. People
(particularly the apps dev support communities) are having a hard enough
Frank Ellermann wrote:
Keith Moore wrote:
you're assuming lots of conditions there that don't apply
in the general case...
The cases involving MX queries are not unusual, a good part
of 2821bis explains how this works. MTAs know if they are
an inbound border MTA or not depending
Dave Crocker wrote:
I keep trying to understand why the SMTP use of AAAA records should be any
different than its use of A records. Haven't heard a solid explanation,
never mind seen consensus forming around it.
because conditions are different now than they were when RFC 974 was
written,
Ned Freed wrote:
er, NO. SMTP has no dependence on what may or may
not exist in the DNS. Forcing SMTP to depend on DNS
is a huge mistake. And yes Virginia, I plan on removing
A rr's from my zones (eventually)
You know, that's a very interesting point. One of the more
Ned, by "disable MX lookups", do you mean "don't put MX records
into the DNS zone and therefore force a fallback to the address
records" or "ignore the requirement of the standard that
requires using MX records if they are there"? If the latter,
the behavior, however useful (or not), is, IMO,
FWIW, my opinion is that if you want to accept incoming mail via IPv6,
you need to advertise one or more MX records that point to ipv6-capable
hosts.
Treating A records (in the absence of MX records) as implicit MX
records was a hack needed to avoid forcing everyone to advertise an MX
record
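The lookup order under debate can be sketched against a stubbed resolver; the zone data and names below are made up, and this is a simplification of the RFC 974/2821 rules, not a complete implementation:

```python
# Stubbed DNS data, keyed by (name, rrtype); illustrative only.
ZONE = {
    ("with-mx.example", "MX"): [(10, "mail.with-mx.example")],
    ("mx-less.example", "A"): ["192.0.2.7"],
}

def lookup(name, rrtype):
    return ZONE.get((name, rrtype), [])

def mail_targets(domain):
    """RFC 974-style target selection: use MX records if any exist,
    ordered by preference; otherwise fall back to treating an address
    record as an implicit MX pointing at the domain itself."""
    mxes = lookup(domain, "MX")
    if mxes:
        return [host for _, host in sorted(mxes)]
    if lookup(domain, "A"):      # the implicit-MX hack under discussion
        return [domain]
    return []                    # no way to deliver

print(mail_targets("with-mx.example"))   # explicit MX wins
print(mail_targets("mx-less.example"))   # implicit MX from the A record
```

Whether the fallback branch should also consult IPv6 address records is precisely the question this thread is arguing about.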
My point is that this is a solved problem in practice. Probably not
solved in the way you or I would like, but solved nevertheless.
Perhaps the meaning of "solved" has changed over the years :)
Several years ago I saw a sign that read:
Mediocrity is excellence at pursuing the mean.
So, I think it would be good to define IPv6 NAT behavior, and do so
NOW before it's too late, and define it in a way that would appeal
to the admins that have deployed IPv4 NAT today.
There are at least five things about IPv4 NAT that are not now and never
were workable:
(1) Changing
I hate to rain on the parade, but...
1. ULAs will give enterprises the addressing autonomy that they
seek (as RFC 1918 addresses do with IPv4)
Correct. That's available today.
2. Enterprises will NOT need to use NAT to make those
ULAs globally reachable (instead using work
I think that's a pretty bizarre way to measure IPv6 deployment. The
_last_ applications to support IPv6 will be the widely popular apps that
depend on an extensive infrastructure of servers that are currently
associated with IPv4. Email and the web both fall into this category.
And as long as a
Rémi Després wrote:
Keith Moore wrote:
you're essentially making the assumption that all apps are
client-server - i.e. that the session is always initiated in one
direction across the NAT box. that's one of the biggest problems
with traditional NATs, and is part of what makes NAT-PT
Rémi Després wrote:
Keith Moore wrote:
IPv4-mapped addresses (those of the form ::ffff:{ipv4 address}) should
never appear on the wire. Embedding an IPv4 address within an IPv6
address might make sense in certain cases, but it doesn't work in general.
If using them on the wire
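The mapped form is easy to demonstrate with Python's ipaddress module; it is an API-level convention (e.g. a v4 connection surfacing on an AF_INET6 socket), which is the point being made about it not belonging on the wire:

```python
import ipaddress

# ::ffff:a.b.c.d is the IPv4-mapped form; the embedded v4 address
# can be recovered from it.
addr = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(addr.ipv4_mapped)                               # -> 192.0.2.1

# An ordinary IPv6 address has no mapped v4 component.
print(ipaddress.IPv6Address("2001:db8::1").ipv4_mapped)  # -> None
```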
Iljitsch van Beijnum wrote:
it is also necessary for servers on IPv6 only networks to be able to
have presence on the public IPv4 internet. how else (for example) can
IPv6-only networks accept incoming email from v4 only networks and/or
serve web pages to v4-only clients?
In principle I
The traditional NAPT model, which can only permit outgoing
sessions, is not sufficient for an effective transition from v4 to v6.
Here we differ. I find that to be good enough for the vast majority
of cases, and in those cases where it doesn't work one can deploy
native access so that the
As far as I'm concerned, NAT-PT is defined in RFC 2766, and describes a
particular way of translating between IPv4 and IPv6. If people are
using NAT-PT in a different way, that's unfortunate. But I don't
consider it a major problem because NAT-PT really isn't usable anyway,
for two sets of
Alain Durand proposed in 2002:
- NAT64 for IPv6 -> IPv4
- NAT46 for IPv4 -> IPv6
Practically speaking, any box that translates between v4 and v6 has to
be able to translate in both directions. Which side is "to" and which
is "from" then? You don't want to make the assumption that the apps are
Rémi Després wrote:
Keith Moore wrote:
Alain Durand proposed in 2002:
- NAT64 for IPv6 -> IPv4
- NAT46 for IPv4 -> IPv6
Practically speaking, any box that translates between v4 and v6 has to
be able to translate in both directions. Which side is "to" and which
is "from"
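For reference, the usual way such a translator represents a v4 host on its v6 side is to embed the IPv4 address in the low 32 bits of a dedicated /96 prefix (the 64:ff9b::/96 well-known prefix used below was standardized later, in RFC 6052; this sketch just shows the embedding):

```python
import ipaddress

def embed_v4(prefix96: str, v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of a /96 IPv6 prefix,
    as a NAT64-style translator represents a v4 host to v6 clients."""
    net = ipaddress.IPv6Network(prefix96)
    if net.prefixlen != 96:
        raise ValueError("need a /96 prefix")
    return net[int(ipaddress.IPv4Address(v4))]

print(embed_v4("64:ff9b::/96", "192.0.2.1"))  # -> 64:ff9b::c000:201
```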
I'm no longer an AD; if I were, my attitude would be simple: the IETF
has decided, as a group, that patented technology is acceptable.
There's no point in reopening the question for every individual document.
+1
though I'd probably phrase this differently, e.g.: the IETF has decided,
as a