In this case 'the idea' was that we would like an option code point
assigned.
if that's all there is to the proposal, it should be rejected
out-of-hand without discussion.
who said that's all there was to the proposal? there was clearly a
proposed use. but the use factors into the question only
Hans Kruse wrote:
But the refusal of a code point is not effective, and in fact
counter-productive (since the option will indeed be deployed, you just
won't know what code point it self-assigned).
that's not true in general. each situation is different.
the alternative - to blindly assign
we need to allow IESG to use some discretion here.
what *kind* of discretion?
should we allow the IESG the discretion to decide what they like or
don't like and then allow them the authority to make the decision based on
that?
IESG should have discretion to evaluate such
Date: Thu, 30 Jun 2005 21:12:07 -0400
From: Jeffrey Hutzelman [EMAIL PROTECTED]
Message-ID: [EMAIL PROTECTED]
| Note that I would consider it entirely reasonable for the IESG to say that
| something conflicts with work in the IETF on the grounds that its
Nothing like responsibility to look after the overall technical health of
the Internet was assigned to the IESG.
You seem to be forgetting something, Dave.
Every IETF participant is supposed to use his best engineering judgement
as to what is best for the Internet as a whole, when making
You seem to think that every IETF participant _except_ those on IESG
should do so. You seem to think that everyone else should be able to
exercise their judgement but that the IESG should just serve as
process facilitators and rubber stamp technical decisions that others
make.
The problem is that the IETF, and the IESG in particular, sees a protocol,
sees it is planned to be used with internet related protocols, and so
perhaps on some part of the internet, and decides that's ours, we must
be the ones to decide whether that is any good or not, now how do we force that
Keith,
The IESG can still exercise their best engineering judgment as
individuals, as the rest of us do.
The IESG role itself need not incorporate a privileged position from
which to wield that judgement. There's plenty left to do.
Joe,
The IESG has several duties that are defined in RFC
If IESG people were to personally benefit from their exercise of this
privilege you'd have a valid gripe.
Personal gain is not the only motive; power can be its own motive. The
gripes are validated by cases of abuse of privilege.
If there's no obvious personal interest, whether a particular
2026 separates process management from _independent_ technical review,
IMO for good reason.
I think you're reading more emphasis on independence than was intended
in 2026. But this is also subjective.
History reminds us of the abuses of power that started with act first,
appeal later.
External reviews are what I'm favoring - external, independent reviews.
so when IESG provides the external review, that's bad, but when someone
else does external review, that's good? I disagree. part of IESG's purpose
is to do review from a broad perspective.
External reviews are what I'm favoring - external, independent reviews.
so when IESG provides the external review, that's bad, but when someone
else does external review, that's good?
Yup. When judges decide cases, that's bad. When juries do, that's good.
not necessarily. judges can
Filtering can always be done, that is the right of the network
administrator doing the filtering. That some users won't like it is
indeed an issue, but that is political and not technical.
Thinking of this as a right is the wrong way to think about it. A
more relevant question is whether
Please don't reuse the word security for all three of these issues.
They're very different. I agree that the IETF should do more against
spam and DDoS. The trouble with spam is that there is simply no
consensus to be reached, and the IETF doesn't have any mechanisms to
move forward when
The reason that there is no consensus in the spam area is that most
proposed solutions are claiming to solve the whole problem (or
at least a big chunk of it) but are grossly overstating their
applicability. To some degree this is because people want to have
the prize of creating _the_
I actually think IETF might function better if nobody's badge had his
company's name on it, and nobody used a company email address. People
place way too much importance on someone's employer. Yes, sometimes
people break the rules and speak for their employers, but it's not wise
to assume
What is this document for? No one has implemented this LLMNR protocol. No
one has any plans to implement this protocol. No company plans to ship
products using this protocol. Even Microsoft has not hinted at plans
to use LLMNR in Longhorn/Vista.
I don't see anything in RFC 2026
I'm sending this to the ietf list because mailman is widely used for
IETF mailing lists.
Recent versions of mailman appear to have a flaw that allows the sender
of a message to send a copy to everyone in a mailing list _except_ some
set of explicitly specified recipients, and there will be no
I am perhaps just being slow and dim-witted after minor surgery, but why
should a protocol that no-one will use be standards track ?
Why should we accept a few (mostly axe-grinding) peoples' assertions
that no-one will use it?
Keith
I agree that getting authentication into the email protocols is a good
thing, but TLS does not achieve much more than SPF/Sender-ID in that
respect. DKIM is a much better platform.
Not clear. As currently envisioned, DKIM doesn't address phishing
because it basically says I saw this message
Mark Baugher wrote:
I don't necessarily see the two needing to be combined so closely.
perhaps not, but if DKIM doesn't address either phishing, spam, or
viruses, and it doesn't authenticate content authorship, what good is it?
perhaps not, but if DKIM doesn't address either phishing, spam, or
viruses, and it doesn't authenticate content authorship, what good is it?
I said that I don't think phishing should be combined so closely with
the problem of spam. But you throw in viruses and claim that it doesn't
address
Dave Singer wrote:
The whole idea that 'real DNS' can arbitrarily pre-empt local name
resolution seems, well, wrong, and needs serious study for security
implications for the services using those names, no?
The whole idea that local names should look like DNS names and be
queried through the
The whole idea that local names should look like DNS names and be
queried through the same APIs and user interfaces seems, well,
wrong (or dubious at best), and needs serious study for the
implications of applications using those APIs and the impact of
such names on DNS, no?
No. Or at
Specifically, when I first became associated with you all in 1992, the
RFCs of most IETF standards were incomplete and the reference
implementations (i.e., running code) were what was considered to be
normative.
I've been involved with IETF since circa 1990 and have always been of
the
Keith,
I resonate with your points except that the earliest IETF standards
(i.e., IP itself, TCP itself, others) were incompletely specified by
RFCs. Therefore, interoperable implementations could only occur with
reference to the reference implementations.
I don't know what reference
Bill Sommerfeld wrote:
I like Keith's terms MON / MRN (mail originating / receiving
networks) better, because seen as sets of systems they can be
different. An outsourced backup MX would be still a part of
the MRN (if I got this right), but not belong to the same
COG. MUAs also
On Nov 30, 2005, at 2:54 PM, Frank Ellermann wrote:
As Bob said, raw UTF-8
characters won't fly with `cat rfc4567 > /dev/lpt1` and other
simple uses of RFCs.
1. I wonder what proportion of the population prints things this way?
2. If the file is correctly encoded in UTF-8 and the above
On Dec 1, 2005, at 12:16 PM, Keith Moore wrote:
Also, the vast majority of printers in use don't natively support
printing of utf-8, thus forcing users to layer each of their computer
systems with more and more buggy cruft just to do simple tasks like
printing plain text. Perhaps those
Once people started to use foliation for citations they started to see
the disadvantages. Editions of the bible were particularly problematic
once people started attempting to cross reference translations back to
the original text. This was a particular problem with the old testament
as some
On 12/1/05, Hallam-Baker, Phillip [EMAIL PROTECTED] wrote:
On a point of information, most of the references I see in existing RFCs
are to sections in any case.
I suspect this is because almost everyone refers to an HTML version in
informal communication. But, I actually agree with
Why do you consider the TXT version to be authoritative if as you admit
the HTML version is the one that is read by reviewers and readers?
I don't think that's actually true. The TXT versions are not only
authoritative, they're also the ones available from official sources.
And personally I
I have, in the past, argued to the IESG that I did not think the SPF
I-D should be marked Experimental because I did not see it being an
experiment. It has been out for 2 years now and it is far too widely
deployed to make significant changes. Instead, I thought it should be
standards track.
If we standardize a technology, we are saying that the technology solves
some problem, and that its usage has well-understood and accepted
consequences.
It has to be acceptable to *the people who are involved*. People who
are doing things contrary to contracts that they have signed, TOSes,
If we standardize a technology, we are saying that the technology
solves some problem, and that its usage has well-understood
and accepted consequences.
Ergo since Microsoft and many others have already said they are going to
be using PRA type techniques it follows that they had better be
From: Keith Moore [mailto:[EMAIL PROTECTED]
However that doesn't mean that IETF should necessarily
endorse, or even
document, any of these technologies. Sometimes it's a disservice to
the community to document a bad idea.
The protagonists in this matter have already conceded
Keith However that doesn't mean that IETF should necessarily
Keith endorse, or even document, any of these technologies.
Keith Sometimes it's a disservice to the community to document a
Keith bad idea.
I'm from the school that believes documenting existing behavior is
always
Dave Crocker wrote:
IETF should not make it more difficult for the Internet to adapt to
changing conditions by standardizing protocols that only work in a
narrow set of conditions - even when those conditions are reflected
in some providers' current contracts or policies.
Like ARP?
I wasn't
Burger, Eric wrote:
In Vancouver in the lemonade work group meeting a number of people
expressed interest in the creation of a list dedicated to the discussion
of user notification technology.
This list is for discussions relating to the requirements, definition,
and directions for message
Given all (2)822 header fields as they are, ignoring the trace
header fields, the PRA algorithm is the only possible way to
divine a purported responsible address from this zoo. It's
not new; it's "read and understand RFC 822" as published in 1982.
actually, no. It's a perversion of RFC 822 which
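For readers unfamiliar with it, the PRA (Purported Responsible Address) derivation being debated walks a fixed precedence of header fields, roughly Resent-Sender, then Resent-From, then Sender, then From. A simplified sketch of just that precedence idea (the full algorithm has additional rules for Resent- blocks, multiple addresses, and malformed headers, which are omitted here):

```python
# Simplified sketch of PRA header-field precedence. This is only the
# field-ordering idea, not a complete or conforming implementation.

def purported_responsible_address(headers):
    """headers: dict mapping lowercase field name -> address string."""
    for field in ("resent-sender", "resent-from", "sender", "from"):
        if field in headers:
            return headers[field]
    return None  # no PRA can be derived from these headers

msg = {"from": "user@example.com", "sender": "bulk@list.example.org"}
print(purported_responsible_address(msg))  # Sender outranks From
```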
Bob Braden wrote:
clean
*
* Dave Crocker wrote:
* IETF should not make it more difficult for the Internet to adapt to
* changing conditions by standardizing protocols that only work in a
* narrow set of conditions - even when those conditions are reflected
* in some providers'
I believe this looks OK, but CypherTrust IronMail sees this line as a
MIME violation:
Content-Type: application/octet-stream;
boundary==_622_1135029170_19710
CypherTrust says "the same boundary cannot be redefined."
the error message is misleading, and it probably does indicate a bug in
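For context: RFC 2045 lists '=' among the tspecials, so a parameter value that begins with '=' must appear quoted in the header, even though '=' is a legal character inside a boundary value itself. A conforming rendering of the header above would look like this (illustrative only; whether this is what the generating software intended is an assumption):

```
Content-Type: application/octet-stream;
	boundary="=_622_1135029170_19710"
```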
As I argued on the DKIM working group list, I think this text is a bad
idea. Part of IETF having change control of a specification is having
the ability to make changes, and the bar of "necessary to the success of
the specification" is way too high for that. Note that I'm not
suggesting
Personally, I think each on-the-face-of-it-reasonable suggested
improvement has to be considered, but the more time passes and
the more the specifications are mature, the higher the bar is
raised. Since these specs have been around a while and have
been implemented it seems reasonable to
1. As interesting as such a discussion might be, it has no effect on the
technical work. The choices made were the choices made. The goal is to
make as few new ones as we can, not to spend time reviewing past
choices.
That is almost never an appropriate goal for an IETF working group
the specification is way too high for that.
Too high for what? Instead of arguing principles, Eric needs to
indicate what specific technical work excluded by this language
is actually essential to the goals of DKIM.
Dave,
You keep making statements like that without a shred of
Since experimentation resulted in significant Internet deployment of these
specifications, the DKIM working group will make every reasonable attempt
to keep changes compatible with what is deployed, making incompatible
changes only when they are necessary for the success of the
It's also a principle of good engineering that you don't make unnecessary
changes to deployed code.
I think that's an overgeneralization. There's neither a wide enough
deployment of DKIM, nor sufficient evidence of DKIM's suitability for
the diverse user community, for the current DKIM
It's a significant precedent that IETF charters have included language
of this sort when there has been a deployed base at the time the WG is
chartered. But can someone explain what's different in this wording
that warrants departing from the version on which there seems to be
rough
Although not encouraged, non-backwards-compatible changes to the
basis specifications will be acceptable if the working group
determines that the changes are required to meet the group's
technical objectives and the group clearly documents the reasons
for making them.
I
Related to how much the charter pre-supposes the solution, the sentence
that "Public keys needed to validate the signatures will be stored
in the responsible identity's DNS hierarchy" seems like a pretty heavy
constraint on the possible solutions and one that some proposals
disagreed with.
I
The question that I think the IESG should be asking themselves is how this is
similar to and/or different from other groups that have been chartered or will
be in the future. Nearly every group has some people with a fairly strong idea
of at least one way to solve the problem. Without this, it is usually not
If there were an I-D describing such a protocol, I'd be interested in
reading it - is there one?
Not yet. But it hardly seems like the time to write an I-D when there
is at present considerable effort being invested to preclude such an I-D
being considered by the group.
Keith
Keith Moore wrote:
OTOH, the assumption that _all_ public keys used to validate DKIM
signatures will be stored in DNS is a very limiting one, because it
appears to lead to either
a) a constraint that policy be specified only on a per-domain basis
(which is far too coarse for many
Why force someone to listen to the whole
messaging/applications/rai/weather/snmp/smtp/ldap/foobar discussion if
all they are interested in is the limited topic of notifications?
I have no problems with insisting that posts to the list be in some way
about notifications, or about how a
Apparently you expect the extensive, open group consensus process that
was pursued TWICE on this matter to have no import, but the last-minute,
vague whim of a few posters instead should hold sway.
Dave,
Unless you can cite an IETF BCP RFC that authorizes unchartered,
self-appointed, ad hoc
It seems like the more efficient approach would be to essentially have two
stages, where the authors first sign off on the result of copy-editing, and
then on whatever cosmetic changes are needed after the final conversion.
That assumes that the xml-nroff conversion is always error-free.
Dave Crocker wrote:
when an issue is raised repeatedly, by many different people, it almost
always has some degree of inherent legitimacy. that makes it worth
attending to.
some tactical problems have strategic impact. in this case, decisions
which well might serve to make the ietf less
Well, I'm not claiming that latency isn't a factor in protocol
performance. What I'm claiming is that it's not clear that latency
in the initial connection setup handshake (in this case the TLS
one) is a major factor in protocol performance. Recall that the
way that TLS works is that you do an
when editing documents that purport to describe existing practices and
protocols, there is often a conflict between documenting existing
practice and describing desirable practice (or undesirable practice).
this conflict results in confusion of goals, and one possible result is
that the document
The whole idea of fixed ports is broken.
The idea that there are only 1024 or even 65535 Internet applications is
broken.
agree with you so far.
The Internet has a signalling layer, the DNS. Applications should use it.
strongly disagree. DNS is a huge mess. It is slow and unreliable.
Harald The IESG pointed some of the issues out to the RFC Editor, who handled
Harald the communication with the author; that was the procedure at that time.
Harald Nevertheless, the RFC Editor felt that the document was worthy of
Harald publication, and published anyway.
As the written
You have totally confused ESRO with EMSD.
RFC-2188 is different from RFC-2524.
I stand corrected.
Tony gets it:
Tony The point is that the past IESG practice which has driven out those who
Tony would submit individual submissions, resulting in the current ratios,
Tony MUST NOT
I will however caution against the assumption that IESG is inherently
overbearing and a separate review function is inherently more
reasonable. No matter who does the review there will always be the
potential for capriciousness on the part of the reviewer.
It seems to me that while
It's not at all clear to me that we can afford the resources to give the
privilege of appeal to mere individuals.
Excuse me? What do you think IETF is or do you really prefer to see it
officially turn into IVTF?
IETF is, or should be, an engineering organization. Not a vanity press.
IETF
Refusing new registrations is what I meant by closing the registry.
This would be a disaster. It would mean that application designers
would just pick ports at random (some do this already) and there would
be no mechanism for preventing conflicts.
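The conflict concern can be quantified with a quick birthday-problem estimate (a back-of-envelope sketch, not a claim about how any registry actually behaves):

```python
# Rough birthday-problem estimate: if N applications each pick a random
# port from the ~64k range, how likely is at least one clash somewhere?
def collision_prob(n, space=65535):
    p_no_clash = 1.0
    for k in range(n):
        p_no_clash *= (space - k) / space
    return 1 - p_no_clash

# With only a few hundred randomly chosen ports, a clash is already
# roughly a coin flip; by a thousand it is near-certain.
for n in (10, 100, 300, 1000):
    print(n, round(collision_prob(n), 3))
```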
Regarding SRV, it's not acceptable to
DNS is essential already.
false. but even to the extent that this is true, this is a bug, not a
feature.
Firewalls can cope
but new applications can't.
Keith
___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf
you shouldn't allow unrestricted access to the network from unmanaged
hosts, that's a recipe for disaster.
no, what's a disaster is to use source IP addresses or port numbers as
an indication of trustworthiness on any network that extends beyond a
single room. the notion that you can manage
- Conclusion 2: There is no reason for standards to uphold the
distinction between ports below 1024 and ports 1024 and above any more.
I agree that the requirement on UNIX-like systems to be root in order
to bind to ports below 1024 is, in hindsight, a Bad Idea - but mostly
because of insufficient privilege granularity. I
We have to work with what we have here; that unfortunately means the original
DNS plus the SRV record.
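For context, SRV records (RFC 2782) map a service name like _imap._tcp.example.com to one or more host/port targets, and clients choose among them by priority and weight. A minimal sketch of the client-side selection, using made-up record data (a real client would fetch the records through a DNS library):

```python
import random

# Hypothetical SRV records for _imap._tcp.example.com:
# (priority, weight, port, target) -- lower priority wins; weight
# load-balances among the records sharing the lowest priority.
records = [
    (10, 60, 993, "mail1.example.com"),
    (10, 40, 993, "mail2.example.com"),
    (20, 0, 993, "backup.example.com"),
]

def select_srv(records):
    lowest = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == lowest]
    total = sum(r[1] for r in candidates)
    if total == 0:
        return candidates[0]
    pick = random.uniform(0, total)   # weighted choice among candidates
    running = 0
    for r in candidates:
        running += r[1]
        if pick <= running:
            return r
    return candidates[-1]

prio, weight, port, host = select_srv(records)
print(host, port)  # one of mail1/mail2; backup is used only as fallback
```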
I never cease to be amazed at people who insist on taking things that
basically work fairly well and replacing them with more complex
mechanisms that are known to work more slowly and less
It's the concept of well-known ports that's broken, not the provision for 65K
ports.
offhand I don't see why we need two kinds of names for services,
because that creates the need for a way to map from one constant to
another - and that mapping causes failures which seem entirely
unnecessary.
We need to calculate the average cost of an IETF meeting across all the
continents, and that cost is the one that needs to be put on the table by any
sponsor/host regardless of where the meeting is actually going to be hosted.
my mind just boggled. or my bogometer just pegged.
no, this does not seem
What I think Jordi is saying is that he wants the US sponsors to
subsidize the cost of the overseas meetings. At least that's what it
works out to be
Well, that's how I interpreted it also. What I found mind-boggling was
the idea that companies that volunteer to host one meeting would
So you mean you think it is reasonable and fair to go for the cheapest option
even if every time more and more people can't attend because a government
decides not to grant visas?
you're conflating two problems - cost and immigration laws.
having fewer meetings in the US is a reasonable response to US
Except of course that many of the US Sponsors are in fact global
companies anyway. Think about the list of recent and future sponsors.
sure, but the sponsors get some leeway in where meetings are held (since
we're more likely to hold a meeting in an area where someone is willing
to sponsor
It will also be a more open process. Today, in my opinion, having to
negotiate with each possible sponsor in secret is a broken concept, and
against our openness.
I'm a lot more concerned about openness in IETF protocol development.
some kinds of negotiations really do need to be done in
I know you've heard this all before, but it's been getting
increasingly difficult for us WG Chairs to get all the key
people working on a protocol to fly across the planet for
a 2 hour meeting. These are busy people who can't
afford to block out an entire week because they don't
know when or
One option however would be to seek 'partnerships' between vendors and
the IETF that span more than one meeting. Unless that impacted the
perceived 'neutrality' of the IETF and its standardisation processes.
I suspect that this would indeed be a question.
To invoke a particularly
There are quite a few really good ideas for improvements to IETF
productivity.
The problem with taking a particular suggestion and then adding others to it
will be that nothing gets considered in detail and nothing gets done.
you say that like it's a bad thing.
not to pick on
On Fri Mar 24 17:47:04 2006, Keith Moore wrote:
I think that Dave's message reflects a common frustration in IETF
that we talk a lot about particular problems and never seem to do
anything about them. When people express that frustration, they
often seem to think that the solution
I think that Dave's message reflects a common frustration in IETF
that we talk a lot about particular problems and never seem to do
anything about them.
Quite so, which is why most of us feel that there should be
a strong bias in favor of action and experimentation rather
On Fri Mar 24 19:50:15 2006, Keith Moore wrote:
In other words, there are working groups where a substantial
number of people involved in the discussion are not only not
going to be implementing the proposals, but don't actually do any
kind of implementation within the sphere
maybe this is because protocol purity zealots take a long
term view and want to preserve the flexibility of the net
market to continue to grow and support new applications,
whereas the NAT vendors are just eating their seed corn.
Your long term view is irrelevant if you are unable to
maybe this is because protocol purity zealots take a long term
view and want to preserve the flexibility of the net market to
continue to grow and support new applications, whereas the NAT
vendors are just eating their seed corn.
Your long term view is irrelevant if you
Your long term view is irrelevant if you are unable to meet short term
challenges.
very true. but at the same time, it's not enough to meet short term
challenges without providing a path to something that is sustainable in
the long term.
This is reasonable, but there is no realistic
So the real question is: Given NAT, what are the best
solutions to the long term challenges?
A protocol that would be only v4 with more bits in the first place, with
routers / NAT boxes that would pad/unpad extra zeroes (also including
extra TBD fields). As this would be 100% compatible with v4
This is reasonable, but there is no realistic path to ipv6 that the
known world can reasonably be expected to follow.
That's because people keep thinking that there needs to be a path from
IPv4 to IPv6 that makes sense for all applications. No such path
exists, because applications
In this case the benefit to running NAT on my home network is that it saves
me $50 per month in ISP fees, means I have wireless service to the whole
house and means that guests can easily connect.
one immediate benefit to my running IPv6 on my home network is that I
can access any of my
NAT is a dead end. If the Internet does not develop a way to obsolete
NAT, the Internet will die. It will gradually be replaced by networks
that are more-or-less IP based but which only run a small number of
applications, poorly, and expensively.
...or you will see an overlay network build
From: Keith Moore moore@cs.utk.edu
NATs do harm in several different ways
It's not just NATs that are a problem on the fronts you mention, though:
yes, there are other things that do harm besides NATs.
however, that's not a justification for NATs
IP addresses currently serve two completely separate functions: they
identify *who* you are talking to, and they identify *where* they are.
there's a tad more to it than that which is essential:
in a non-NATted network, IP addresses identify where a host is in a way
that is independent of the
The other side of the coin is the fact that many devices will effectively
require no more than a /128 because of the way they connect up to the
network. For example cell phones will be serviced on plans where the
subscription fee is per device.
it's perfectly reasonable to connect a small
You didn't mean "locators are a lot easier to deal with if the name
has nothing to do with where the thing it names is"; you meant
"locators are a lot easier to deal with if their meaning (i.e. the
thing they are bound to) is the same no matter where you are when
you evaluate them."
This is a
it would be okay if the only apps you needed to run were two-party
apps. in other words, it's not just users and hosts that need
addresses to be the same from everywhere in the network - apps need
stable addressing so that a process on host A can say to a process on
host B, contact this
it would be okay if the only apps you needed to run were
two-party apps. in other words, it's not just users and hosts
that need addresses to be the same from everywhere in the
network - apps need stable addressing so that a process on host
A can say to a process on host B, contact this process
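The referral problem described above is easy to demonstrate: an application learns its own socket address and passes it to a peer as data, expecting the peer to be able to use it. On an un-NATted network this works; behind a NAT, the address A reports is a private one that B cannot reach. A minimal localhost sketch of the pattern itself (the assumption being that the address survives transit unchanged, which is exactly what NAT breaks):

```python
import socket, threading

# Host-A side: a listener that advertises its own address in-band.
# Behind a NAT, getsockname() returns a private address that is
# meaningless to a peer on the other side of the translator.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # ephemeral port on loopback
listener.listen(1)
addr = listener.getsockname()     # the address A will advertise as data

def serve():
    conn, _ = listener.accept()
    conn.sendall(f"{addr[0]}:{addr[1]}".encode())  # referral sent as data
    conn.close()

threading.Thread(target=serve).start()

# Host-B side: receives the referral and would try to connect back to it.
client = socket.socket()
client.connect(addr)
referral = client.recv(64).decode()
client.close()
print(referral)  # usable by B only if no NAT rewrote the path in between
```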
Point made many times, and the proof is in the pudding: if they're using
it so widely it means it works for them.
Actually, no. The world is full of examples of practices and mechanisms
that are widely adopted and entrenched that work very poorly. You only
have to look at any day's
- DNS is often out of sync with reality
Dynamic DNS updates are your friend.
From an app developer's point-of-view, DDNS is worthless. DDNS is far
from universally implemented, and when it is implemented, it's often
implemented badly. DDNS can actually make DNS a less reliable source
of
I find myself wondering, don't they get support calls from
customers having to deal with the problems caused by the NATs?
Because they don't answer them. In the process of doing the
work that led to RFC 4084, I reviewed the terms and conditions
of service of a large number of ISPs in
Number portability,
after all, only requires a layer of indirection. We can certainly
engineer that!
And we have. It's called the DNS.
no it's not. DNS sucks for that. it's too slow, too likely to be out of sync.
DNS names are the wrong kinds of names for this. the protocol is poorly
Keith Moore wrote:
...
I agree with Christian. we can build indirection into the routing
infrastructure. it's probably the right way to go.
One could argue that we already have. It's called MPLS...
sort of. MPLS with globally-scoped tags, and a database of
coarse (think subnet