Re: Gen-ART review of draft-ietf-sidr-bgpsec-threats-06

2013-10-02 Thread Stephen Kent

David,


Steve,

I think the modified introduction text suffices to connect the PATHSEC 
and BGPsec terms, but I don't think that referring to the SIDR WG 
charter for the PATHSEC goals is reasonable -- an RFC is an archive 
document, whereas a WG charter is not.



The revised intro text now paraphrases the text from the SIDR charter that
describes the path security goals.

Steve


Re: Gen-ART review of draft-ietf-sidr-bgpsec-threats-06

2013-10-01 Thread Stephen Kent

David,

Since this doc logically precedes the BGPsec design, I still think it's 
appropriate to
use PATHSEC here. But, we can add a sentence to connect the terms. I 
propose this modified text for the introduction:


This document describes the security context in which PATHSEC is
intended to operate. (The term PATHSEC is employed in this document
to refer to any design used to achieve the path security goal
described in the SIDR WG charter. The charter focuses on mechanisms
that will enable an AS to determine if the AS_path represented in a
route represents the path via which the NLRI traveled. Other SIDR
documents use the term BGPsec to refer to a specific design.) ...
The phrase "calls for" seems appropriate in the cache discussion. There
is no MUST in the RFCs about using a local cache. The docs encourage RPs
to maintain a local cache, and 6481 states that not using one is NOT
RECOMMENDED.  All of the RP software of which I am aware does so, but it
is not an absolute requirement.

I think we've agreed that the quoted text is a static assertion and thus
need not be annotated to reflect more recent RFCs.

Steve






Re: [karp] Gen-ART review of draft-ietf-karp-crypto-key-table-08

2013-08-15 Thread Stephen Kent

David,

I agree with Sam here. The key table is analogous to the SPD in 4301, 
but not

the PAD.

Another doc being developed in the KARP WG does have a Routing 
Authentication Policy
Database (RAPD) that incorporates aspects of the PAD from 4301, as well 
as some

SPD fields.

Steve


[sidr] Last Call: draft-ietf-sidr-algorithm-agility-08.txt (Algorithm Agility Procedure for RPKI.) to Proposed Standard

2013-01-04 Thread Stephen Kent
The tech report cited in Eric's message is not a critique of the SIDR 
algorithm agility
document that is the subject of this last call. The tech report is a 
critique of the overall
SIDR repository system and object retrieval paradigm, with an emphasis 
on the speed with which
relying parties (principally ISPs) will be able to acquire RPKI data. 
The RPKI repository
system is defined in RFC 6481; the RP object retrieval approach is 
described in RFC 6480.
The tech report includes assumptions about the addition of many 
instances of additional objects
(router certs) to the RPKI repository system, but these assumptions are 
based on I-Ds that are in process in SIDR, and thus may be the more 
appropriate focus of the report, in terms of responses.


The tech report includes no specific criticisms of the algorithm agility 
mechanism described by the I-D in IETF LC, nor does it suggest any 
changes to this doc. An extensive discussion of the tech report took 
place on the SIDR list, in early December. That discussion also did not 
suggest any proposed changes to the algorithm agility doc. Thus the 
authors do not plan to make any changes as a result of this comment 
being posted during IETF LC.


Steve Kent (on behalf of Roque Gagliano and Sean Turner)



Re: [Gen-art] Gen-ART review of draft-ietf-dnsop-dnssec-dps-framework-08

2012-07-18 Thread Stephen Kent

Joe

You're right, I did miss your point, quite thoroughly :-)

I am guessing that the answer is that there's no corresponding facility in 
DNSSEC for a policy identifier to be published with a DNSKEY RR, but I say 
that largely ignorant of X.509 and attendant CA policy and hence perhaps am 
still misunderstanding what you're looking for.


In X.509 each cert can contain a policy OID that indicates the policy 
under which the cert was issued. Thus, when a CA changes its policy it 
can issue certs under the new policy with the new policy OID. This makes 
it clear to relying parties what policy is in effect, and when a CA 
changes its policy, irrespective of other changes, e.g., key rollover.
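
As a minimal sketch of the mechanism described above (my illustration,
not text from the thread): a relying party can distinguish certs issued
under old and new policies purely by the policy OID they carry. The OID
values below are placeholders; RFC 6484 defines the actual RPKI
certificate policy OID.

```python
# Illustrative sketch: distinguishing CA policies by the policy OID
# carried in a certificate's certificatePolicies extension.
# OID values here are placeholders, not the real registered values.

OLD_POLICY_OID = "1.3.6.1.5.5.7.14.2"   # hypothetical: current policy
NEW_POLICY_OID = "1.3.6.1.5.5.7.14.3"   # hypothetical: successor policy

def policy_in_effect(cert_policy_oids):
    """Return the recognized policy a cert was issued under, or None."""
    for oid in cert_policy_oids:
        if oid in (OLD_POLICY_OID, NEW_POLICY_OID):
            return oid
    return None

# A cert issued after the policy change carries the new OID, so RPs can
# tell the policies apart irrespective of other changes (e.g., key rollover).
assert policy_in_effect([NEW_POLICY_OID]) == NEW_POLICY_OID
assert policy_in_effect(["9.9.9"]) is None
```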

Steve




Re: Last Call: draft-ietf-sidr-rpki-rtr-19.txt (The RPKI/Router Protocol) to Proposed Standard

2011-12-19 Thread Stephen Kent

The IESG has received a request from the Secure Inter-Domain Routing WG
(sidr) to consider the following document:
- 'The RPKI/Router Protocol'
  draft-ietf-sidr-rpki-rtr-19.txt as a Proposed Standard

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
ietf at ietf.org mailing lists by 2011-12-13. Exceptionally, comments may be
sent to iesg at ietf.org instead. In either case, please retain the
beginning of the Subject line to allow automated sorting.



I am familiar with this document and support its advancement.

Steve
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call draft-sprecher-mpls-tp-oam-considerations-01.txt (The Reasons for Selecting a Single Solution for MPLS-TP OAM) to Informational RFC

2011-10-10 Thread Stephen Kent

I support this doc, and concur with Stewart's comments.

Contrary to what some have suggested, we sometimes (ofttimes?) have more than
one standard for no good technical reason. Sometimes very large, 
competing companies back different standards for parochial reasons, 
to the detriment of consumers, service providers, etc. This appears 
to be one of those cases. Moreover, not opposing a two-standard 
approach sends a bad message, and encourages similar, bad behavior in 
the future.


As the co-chair of PKIX, which has two standards for cert management 
(CMC and CMP), for exactly the bad reasons I cite above, I am 
intimately familiar with this sort of problem.  I failed, in my role 
as PKIX co-chair, to prevent this in that WG.  Let's not repeat that 
sort of mistake here.


Steve


Re: [secdir] Secdir review of draft-ietf-sidr-res-certs

2011-05-04 Thread Stephen Kent

At 6:07 PM -0400 5/3/11, Sam Hartman wrote:

  Stephen == Stephen Kent k...@bbn.com writes:



 I guess the only question I'd have remaining is whether ROAs or
 other signed objects are intended to be used in other protocols
 besides simply living in the SIDR repository?

Stephen The RPKI repository is designed to support a specific,
Stephen narrow set of apps. That's what the CP says, and we try to
Stephen make these certs unattractive for other apps, e.g., by use
Stephen of the non-meaningful names.

You had mentioned that about the PKI before.  Now, though I'm focusing
on the ROAs and other signed objects, not the certificates and CRLs.  Do
these narrow applications involve simply storing these objects in the
repository, or are there plans to use ROAs or other signed objects as
elements in protocols?  At least years ago, for example, there was
discussion of carrying signatures of objects in BGP. I understand that's
not within SIDR's current charter, but is SIDR intended to support that
style of use, or have things been narrowed to a point where that would
require reworking details of the repository and PKI?

If the answer is that those sorts of uses are not in scope for the SIDR
architecture, then I think you've basically resolved my concerns.


Sam,

You might want to look again at the SIDR charter, since it has just 
been revised to include BGP path validation. The path validation 
approach being pursued makes use of the RPKI, consistent with the 
scope of the CP, not surprisingly.


The BGPSEC protocol being defined does not pass around ROAs or other 
RPKI repository objects. It defines two new, signed objects that are 
passed in UPDATE messages, and are not stored in the repository. 
These objects are verified using RPKI certs and CRLs, so there is a 
linkage.


Does that answer your question?

Steve


Re: [secdir] Secdir review of draft-ietf-sidr-res-certs

2011-05-04 Thread Stephen Kent

At 7:48 AM -0400 5/4/11, Sam Hartman wrote:

  Stephen == Stephen Kent k...@bbn.com writes:

Stephen The BGPSEC protocol being defined does not pass around ROAs
Stephen or other RPKI repository objects. It defines two new,
Stephen signed objects that are passed in UPDATE messages, and are
Stephen not stored in the repository. These objects are verified
Stephen using RPKI certs and CRLs, so there is a linkage.

OK, so how will the upgrade work for these signed objects?  In
particular during phase 2, when both old and new certs (under the old
and new profile) are in use, what happens with these signed objects?
Can a party generate both old and new signed objects? If so, will the
protocol scale appropriately?  If not, how does a party know which
signed object to generate?


Sam,

The BGPSEC protocol will have to accommodate changes in the algs used 
to validate BGPSEC signed objects, and changes in algs used to 
validate RPKI objects, and key (not alg) changes in the RPKI objects, 
especially the certs associated with routers. So, format changes are 
just another example of a change in the RPKI that BGPSEC will have to 
accommodate. This is a legitimate discussion point for the BGPSEC 
protocol design discussions that will take place in SIDR. It is out 
of scope for the current set of docs, since they deal only with 
origin AS validation.


It would be inappropriate to suggest delaying this doc (or to suggest 
changes to it) based on discussions that will take place in the 
future, for a protocol that is just being adopted as a WG item now.


Steve


Re: [secdir] Secdir review of draft-ietf-sidr-res-certs

2011-05-04 Thread Stephen Kent

At 10:32 AM -0400 5/4/11, Sam Hartman wrote:

 ...

Let me see if I can summarize where we are:
You've described an upgrade strategy for the origin validation in the
current set of docs. It depends on the ability to store multiple certs,
ROAs and other objects in the repository.


These are requirements that already exist to accommodate key rollover
and alg transition for the RPKI. We have a SIDR doc describing both key 
rollover, ...



You agree that when SIDR looks at using RPKI objects in the newly
adopted work it will need some upgrade strategy for format, keys and
algorithms.  There are probably a number of options for how to
accomplish this. Even if the working group did decide to update
processing of RPKI objects at that point, requiring new behavior from
parties implementing a new protocol would be possible.


I find your last sentence above confusing.  I would say that the 
BGPSEC protocol will have to define how it deals with alg changes for 
the signed objects it defines, with key changes for RPKI certs, with 
alg changes for all RPKI objects, and with format changes for RPKI 
objects and for its own objects.


Steve


Re: [secdir] Secdir review of draft-ietf-sidr-res-certs

2011-05-03 Thread Stephen Kent

At 12:02 PM -0400 4/25/11, Sam Hartman wrote:

...


However, when I look at section 2.1.4 in the signed-object document ,
the signer can only include one certificate.
How does that work during phase 2 when some of the RPs support the new
format and some only support the old format?
Your text above suggests that RPs grab the certificates from the RPKI
repository, but it seems at least for end entity certificates they are
included in the signed object.
What happens for end entity certificates during this form of upgrade?


Sam,

Yes, only one cert is associated with an RPKI signed object, and yes, 
this cert is embedded in the signed object format. So, when a new 
cert is issued, using a new format, the object itself is changed. 
Thus, the text describing Phase 2 is saying that there will be 
parallel instances of certs, CRLs, and signed objects in the RPKI 
repository system, associated with the old and new cert/CRL formats.
I could add a sentence or two making this explicit, and referring the 
reader to the  phased transition strategy used for algorithm 
transition in the RPKI, and described in 
draft-sidr-algorithm-agility. The reference would be informative, as 
this I-D is still in development and I don't want to hold up the 
progress of the rest of the SIDR docs.


Let me know if this addresses your question.

Steve


Re: [secdir] Secdir review of draft-ietf-sidr-res-certs

2011-05-03 Thread Stephen Kent

At 9:27 AM -0400 4/17/11, John C Klensin wrote:

Steve,
Two things:


(1) Given the variable amount of time it takes to get RFCs
issued/ published after IESG signoff, are you and the WG sure
that you want to tie the phases of the phase-in procedure to RFC
publication?


It probably would help if the IESG coordinated with the RFC Editor to 
try to avoid having any problems here. But, we anticipate that the 
durations for the phases will be long enough so that a few months in 
the RFC editor's queue can be managed.




(2) There is an incomplete sentence at the end of (2): This
allows CAs to issue certificates under (more context below).

   john


Whoops.  The final sentence should be:

This allows CAs to issue certificates under the new format before all 
relying parties are prepared to process that format.



Steve


Re: [secdir] Secdir review of draft-ietf-sidr-res-certs

2011-05-03 Thread Stephen Kent

At 11:05 AM -0400 5/3/11, Sam Hartman wrote:

Let me make sure I'm understanding what you're saying.  I can have
multiple ROAs for the same set of prefixes in the repository and valid
at the same time: one signed by a new certificate and one signed by a
previous certificate?  If so, I think I now begin to understand why the
SIDR working group believes this is a reasonable strategy.


yes, that is correct.  This is an essential part of the alg transition
mechanism.



I guess the only question I'd have remaining is whether ROAs or other
signed objects are intended to be used in other protocols besides simply
living in the SIDR repository?


The RPKI repository is designed to support a specific, narrow set of
apps. That's what the CP says, and we try to make these certs unattractive
for other apps, e.g., by use of the non-meaningful names.

Steve


Re: [secdir] Secdir review of draft-ietf-sidr-res-certs

2011-04-15 Thread Stephen Kent

Sam,

In response to your comments on the res-certs draft, re the 
restrictive nature of the relying party checks in certs, we have 
prepare the following text that will be included as a new section in 
the document.


Steve

-

Operational Considerations

This profile requires that relying parties reject certificates or 
CRLs that do not conform to the profile. (Through the remainder of 
this section the term certificate is used to refer to both 
certificates and CRLs.)
This includes certificates that contain extensions that are 
prohibited, but which are otherwise valid as per RFC 5280. This means 
that any change in the profile (e.g., extensions, permitted 
attributes or optional fields, or field encodings) for certificates 
used in the RPKI will not be backward compatible. In a general PKI 
context this constraint probably would cause serious problems. In the 
RPKI, several factors minimize the difficulty of effecting changes of 
this sort.


Note that the RPKI is unique in that every relying party (RP) 
requires access to every certificate and every CRL issued by the CAs 
in this system. An important update of the certificates and CRLs used 
in the RPKI must be supported by all CAs and RPs in the system, lest 
views of the RPKI data differ across RPs. Thus incremental changes 
require very careful coordination. It would not be appropriate to 
introduce a new extension, or authorize use of an extant, standard 
extension, for a security-relevant purpose on a piecemeal basis.


One might imagine that the critical flag in X.509 certificate and 
CRL extensions could be used to ameliorate this problem. However, 
this solution is not comprehensive, and does not address the problem 
of adding a new, security-critical extension. (This is because such 
an extension needs to be supported universally, by all CAs and RPs.) 
Also, while some standard extensions can be marked either critical or 
non-critical, at the discretion of the issuer, not all have this 
property, i.e., some standard extensions are always non-critical. 
Moreover, there is no notion of criticality for attributes within a 
name or optional fields within a field or an extension. Thus the 
critical flag is not a solution to this problem.
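
The distinction above can be made concrete with a small sketch (my
illustration, not text from the draft): an RFC 5280 relying party rejects
only unrecognized *critical* extensions, whereas the RPKI profile rejects
any extension outside the profile. The extension OIDs below are
illustrative placeholders.

```python
# Sketch: RFC 5280 criticality handling vs. the stricter RPKI-profile rule.
# Extensions are modeled as (oid, critical) pairs; the allowed set is
# illustrative, not the profile's actual extension list.

PROFILE_ALLOWED = {"2.5.29.15", "2.5.29.19"}  # e.g. keyUsage, basicConstraints

def rfc5280_accepts(extensions):
    """5280-style RP: reject only unrecognized *critical* extensions."""
    return all(not critical or oid in PROFILE_ALLOWED
               for oid, critical in extensions)

def rpki_profile_accepts(extensions):
    """RPKI-profile RP: reject any extension outside the profile at all."""
    return all(oid in PROFILE_ALLOWED for oid, critical in extensions)

# An unknown, non-critical extension slips past a 5280 RP but not the profile,
# which is why the critical flag alone cannot enforce the profile.
exts = [("2.5.29.15", True), ("1.2.3.4", False)]
assert rfc5280_accepts(exts) is True
assert rpki_profile_accepts(exts) is False
```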


In typical PKI deployments there are few CAs and many RPs. However, 
in the RPKI, essentially every CA in the RPKI is also an RP. Thus the 
set of entities that will need to change in order to issue 
certificates under a new format is the same set of entities that will 
need to change to accept these new certificates. To the extent that 
this is literally true it says that CA/RP coordination for a change 
is tightly linked anyway. In reality there is an important exception 
to this general observation. Small ISPs and holders of 
provider-independent allocations are expected to use managed CA 
services, offered by RIRs/NIRs and by large ISPs. (All RIRs already 
offer managed CA services as part of their initial RPKI deployment.) 
This reduces the number of distinct CA implementations that are 
needed, and makes it easier to effect changes for certificate 
issuance. It seems very likely that these entities also will make use 
of RP software provided by their managed CA service provider, which 
reduces the number of distinct RP software implementations. Also note 
that many small ISPs (and holders of provider-independent 
allocations) employ default routes, and thus need not perform RP 
validation of RPKI data, eliminating these entities as RPs.


Widely available PKI RP software does not cache large numbers of 
certificates and CRLs, an essential strategy for the RPKI. It does 
not process manifest or ROA data structures, essential elements of 
the RPKI repository system. Experience shows that such software deals 
poorly with revocation status data. Thus extant RP software is not 
adequate for the RPKI, although some open source tools (e.g., OpenSSL 
and cryptlib) can be used as building blocks for an RPKI RP 
implementation. Thus it is anticipated that RPs will make use of 
software designed specifically for the RPKI environment, and 
available from a limited number of open sources. Several RIRs and two 
companies are providing such software today. Thus it is feasible to 
coordinate change to this software among the small number of 
developers/maintainers.


If the resource certificate format is changed in the future, e.g., by 
adding a new extension or changing the allowed set of name attributes 
or encoding of these attributes, the following procedure will be 
employed to effect deployment in the RPKI. The model is analogous to 
that described in [draft-ietf-sidr-algorithm-agility-00], but is 
simpler.


A new document will be issued as an update to this RFC.  The CP for 
the RPKI [ID-sidr-cp] will be updated to reference the new 
certificate profile. The new CP will define a new policy OID for 
certificates issued under the new certificate profile. The updated CP 
also will define a timeline for 

Re: Call for a Jasmine Revolution in the IETF: Privacy, Integrity,

2011-03-14 Thread Stephen Kent

At 6:03 PM +0100 3/11/11, Martin Rex wrote:

Phillip Hallam-Baker wrote:


 1) WPA/WPA2 is not an end to end protocol by any stretch of imagination.
It is link layer security.


It is a 100% end-to-end security protocol.


Because the IETF deals in Internet protocols (for the most part), e-t-e
security usually is interpreted to mean a protocol that can traverse routers.
Steve
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: [secdir] Secdir review of draft-ietf-sidr-res-certs

2011-03-14 Thread Stephen Kent

Jeff


Steve noted a desire to limit the liability of entities acting as CAs in
the RPKI.  I agree that goal is desirable, and restrictions on what
certificates issued by those CAs can contain help to do that (provided
the CAs actually comply).  However, requiring compliant RPs to treat all
extensions as critical does _not_ help, because an RP which incorrectly
accepts an over-broad RPKI certificate for some other purpose is
probably not an implementation of this profile and thus not bound by the
restriction.


My comments also noted that part of the strategy to limit the utility of
resource certs in other contexts is to restrict their content. In principle,
establishing constraints on what RPKI CAs issue would do this, but 
experience suggests otherwise :-).  Thus, in order to provide 
immediate feedback to a CA that the certs it is issuing are 
non-compliant, we would like to have RPs reject the certs (when used 
in the intended context). Thus having RPs be very strict in what they 
accept is important as well.



Steve

P.S.

Sam noted that there are potentially lots of RPs.  In principle, 
there are just as many CAs, since every ISP is a CA as well as an RP. 
In reality we anticipate that many small ISPs will take advantage of 
managed CA services (the RIRs are already offering such services), so 
there should be many fewer distinct CAs vs. RPs.  Balancing that is 
the possibility that a number of ISPs, ones that rely on default 
routes, will also not be RPs. So, it's not clear whether we have more 
(distinct) CA or RPs.  I am hopeful that the RIRs will do a good job 
of generating compliant certs in their primary and managed service CA 
roles.



Re: Secdir review of draft-ietf-sidr-res-certs

2011-03-14 Thread Stephen Kent

At 5:58 AM +0100 3/11/11, Martin Rex wrote:

Stephen Kent wrote:


... to act as CAs, and we want to limit their liability.
 One way to do this is to restrict the fields and extensions in
 resource certs to make them not very useful for other applications.


A CA should never sign extensions that it doesn't understand.
Why has the RP to be bothered with this?


this is not about signing certs with extensions submitted by a 
Subject and which the CA does not understand.



A request that everything must be considered critical, even if not marked
as such, implies that every conceivable extension can only be a constraint.


I would prefer to say that the vast majority of extensions are 
excluded from this profile, to make it easier for RP software to 
process resource certs. The critical marking is not equivalent to 
what we have stated in this doc, although there are similarities. For 
example, there are a lot of standard extensions that 5280 requires RP 
software to recognize that are explicitly forbidden in the RPKI 
context.



With its original meaning, the criticality flag could be used to sort
extensions into constraints (critical) and capabilities (non-critical).


Note that not all extensions can be marked critical, so that using 
the critical flag would not be a solution in all cases anyway.



The problem with newly invented constraints is that they require flag days.
Capabilities do not suffer from this, and allow for a smoother migration.


This doc is a profile of 5280 and thus the imposition of constraints 
is to be expected. I do not know what you mean by the phrase 
"capabilities do not suffer from this."



  I also note that we want to impose these constraints on both CAs and
  RPs, because we have lots of examples of CAs issuing bad certs,

 accidentally. We want to use RP strictness to help motivate CAs to
 do the right thing :-).


I don't think that this idea will work.
The consumers of the technology will want things to interoperate,
not to fail.  And implementations will provide the necessary workarouds.


Interoperability is NOT enhanced by allowing certs with extensions 
that are extraneous to the focus of this architecture, and to the CP 
for this PKI. I suggest you read the architecture doc and the CP, 
both of which are available at the SIDR page to get a better sense of 
the context targeted by this profile.



Besides, such an idea is in conflict with rfc-2119 section 6

6. Guidance in the use of these Imperatives

   ... they must not be used to try to impose a particular method
   on implementors where the method is not required for interoperability.



I disagree with your reading of this text, relative to this context.

Steve


Re: [TLS] Last Call: draft-kanno-tls-camellia-00.txt (Additionx

2011-03-14 Thread Stephen Kent

At 8:20 AM +0100 3/11/11, Nikos Mavrogiannopoulos wrote:

...
  What Peter probably meant to say was that IPsec chose to truncate the

 HMAC value to 96 bits because that preserved IPv4 and IPv6
 byte-alignment for the payload.  Also, as others have noted, the hash
 function used here is part of an HMAC calculation, and any collisions
 have to be real-time exploitable to be of use to an attacker.  Thus
 96 bits was viewed as sufficient.


This sounds pretty awkward decision because HMAC per record is full
(e.g. 160-bits on SHA-1), but the MAC on the handshake message
signature is truncated to 96-bits. Why wasn't the record MAC
truncated as well? In any case saving few bytes per handshake
is much less of value than saving few bytes per record. Was
there any other rationale for truncation?


I think you lost the context here.  I was explaining why IPsec chose 
to truncate the hash, not TLS.


Steve


Re: [TLS] Last Call: draft-kanno-tls-camellia-00.txt (Additionx

2011-03-10 Thread Stephen Kent

At 5:08 PM -0800 3/8/11, Eric Rescorla wrote:
On Tue, Mar 8, 2011 at 3:55 PM, Peter Gutmann 
pgut...@cs.auckland.ac.nz wrote:


 Martin Rex m...@sap.com writes:


Truncating HMACs and PRFs may have become first popular in the IETF within
IPSEC.


 It wasn't any "may have become first popular", there was only room for
 96 bits of MAC data in the IP packet, so MD5 was truncated to that size.


This is an odd claim, since:

(a) RFC 1828 (http://tools.ietf.org/html/rfc1828) originally specified
not HMAC but a keyed MD5 variant
with a 128-bit output.
(b) The document that Martin points to has MACs  96 bits long.

Can you please point to where in IP there is a limit that requires a
MAC no greater than 96 bits.

-Ekr


What Peter probably meant to say was that IPsec chose to truncate the HMAC
value to 96 bits because that preserved IPv4 and IPv6 byte-alignment for
the payload.  Also, as others have noted, the hash function used here is
part of an HMAC calculation, and any collisions have to be real-time 
exploitable to be of use to an attacker.  Thus 96 bits was viewed as 
sufficient.
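
The truncation described here is the leftmost-96-bits rule that IPsec's
HMAC-SHA-1-96 transform (RFC 2404) applies: compute the full HMAC, then
transmit only the first 12 bytes. A minimal sketch (my illustration, not
thread text):

```python
import hashlib
import hmac

def hmac_sha1_96(key: bytes, msg: bytes) -> bytes:
    """Compute full HMAC-SHA-1 (160 bits) and keep the leftmost 96 bits,
    as IPsec's HMAC-SHA-1-96 transform does."""
    return hmac.new(key, msg, hashlib.sha1).digest()[:12]

tag = hmac_sha1_96(b"example-key", b"example payload")
assert len(tag) * 8 == 96  # 12 bytes on the wire, preserving 32-bit alignment
```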


Steve


Re: Secdir review of draft-ietf-sidr-res-certs

2011-03-10 Thread Stephen Kent

Sam,

The cert profile is intentionally very restrictive, as you noted.  A 
primary rationale is that we are asking folks who manage address (and 
AS#) allocation to act as CAs, and we want to limit their liability. 
One way to do this is to restrict the fields and extensions in 
resource certs to make them not very useful for other applications. 
We also wanted to make it as easy as possible for relying parties to 
process resource certs. X.509 has a lot of potentially complex 
features and RFC 5280 did not kill off as many as some of us would 
have liked :-). So, profiling certs (and CRLs) for the RPKI makes 
sense. Allowing unknown, non-critical extensions would undermine this 
strategy.  Much as I liked Jon Postel, and worked with him on the IAB 
for a decade, his oft-cited dictum is bad design advice in the 
security arena.


I also note that we want to impose these constraints on both CAs and 
RPs, because we have lots of examples of CAs issuing bad certs, 
accidentally. We want to use RP strictness to help motivate CAs to 
do the right thing :-).


I must admit that I found it a bit amusing that you chose to 
illustrate the potential for a change that would motivate being less 
stringent by citing RFC 3779, which you noted didn't have the problem 
in question :-). (Also note that integers usually are not very much 
constrained in ASN.1 in general, so the observation you made there 
seems a bit odd to me.)


Nonetheless, I get your point. My response is that IF we discover a 
need to change the profile, we could do so, e.g., by changing the 
cert policy (since the policy is specified in the CP, which 
references this version of the cert profile). Any change of this sort 
would have to be phased in over a long time scale.  I suggest you 
look at the algorithm agility I-D for RPKI to see the sort of 
planning envisioned for that sort of change.


I do agree that there is a typo in the doc, re the validity interval. 
Sean noted that after the doc was released, and we agreed that it can 
be fixed after last call.


Steve


Re: NAT behavior for IP ID field

2010-09-14 Thread Stephen Kent

...




 Curious; RFC2402 says
   Flags -- This field is excluded since an intermediate router might
  set the DF bit, even if the source did not select it.
 which is a licence to set the bit but I had not thought to reset the bit.
 RFC791,  RFC1122 and RFC1812 would appear to be silent on this.


I'm curious about RFC 2402, then. Firstly, the host might not implement
PMTUD, and hence setting the DF bit on its behalf could possibly cause
interoperability problems. Secondly, some hosts clear the DF bit if the
advertised MTU in an ICMP frag needed is below some specified
threshold. This RFC 2402 behavior could cause problems in this scenario, too.


We made the decision to exclude the DF bit from the ICV computation 
in 2402, based on what we believed was happening in the net, 
irrespective of what should have been happening ;-). We retained this 
behavior in RFC 4302, for the same reason.
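
For illustration only (my sketch, not thread text): AH computes its ICV
over the IPv4 header with the mutable fields zeroed, which is how the DF
bit (part of the flags byte) ends up excluded. Offsets below follow the
standard 20-byte IPv4 header layout; the exact mutable-field list is
defined in the AH specification.

```python
# Sketch: zeroing the IPv4 header fields AH treats as mutable before
# the ICV computation, so in-transit changes don't break verification.

def zero_mutable_fields(header: bytes) -> bytes:
    """Return a copy of a 20-byte IPv4 header with mutable fields zeroed."""
    h = bytearray(header)
    h[1] = 0            # DSCP/ECN (ToS): may be remarked in transit
    h[6] = h[7] = 0     # flags (including DF) and fragment offset
    h[8] = 0            # TTL: decremented at each hop
    h[10] = h[11] = 0   # header checksum: recomputed after TTL changes
    return bytes(h)

# The DF bit lives in byte 6, so a router setting or clearing it does not
# change the bytes the ICV is computed over.
hdr = bytes(range(20))
assert zero_mutable_fields(hdr)[6] == 0
```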


Steve


Re: secdir review of draft-ietf-csi-send-cert-03

2010-06-17 Thread Stephen Kent

At 1:47 AM -0400 6/2/10, Suresh Krishnan wrote:

...


Hmm. The ETA certificate itself does not need to have the RFC3779 
extension in it, but the relying party needs to fetch an RTA 
certificate which will contain a RFC3779 extension.


More precisely, the ETA MUST NOT have such an extension.

Steve


Re: On the IAB technical advice on the RPKI

2010-03-18 Thread Stephen Kent

At 9:15 PM -0500 3/13/10, Phillip Hallam-Baker wrote:

So what has me annoyed about the IAB advice is that they gave advice
about a particular means where they should have instead specified a
requirement.


Phil,

I am not commenting on your proposal, but I do want to make a few 
observations that are relevant to this discussion.


I believe that the point the IAB was making is that if each RIR acts 
as a TA, any one of them could make an error (or suffer a compromise) 
that would allow for conflicting certs to be issued below the 
affected RIR. The certs used for the RPKI include RFC 3779 
extensions. If IANA acts as the only TA, then it will issue certs to 
the RIRs representing their allocations from IANA. Unless IANA makes 
an error in this cert issuance procedure (which should be aligned 
with its allocation of address space to the RIRs), then there can be 
no (undetected) conflicts among the RIRs re resource holdings. Also 
recall that IANA needs to act as a TA anyway, for unallocated, 
legacy, and reserved address space. So the choice the IAB was 
addressing was one TA vs. six.


You commented that using X.509 certs in this context requires 
completely new path validation semantics. The semantics are 
well-defined in RFC 3779, which was issued in June 2004. I also 
observe that OpenSSL already supports cert path validation using 3779 
extensions, and has done so for at least a couple of years. Note also 
that the RPs here are primarily ISPs. They will use software that 
yields outputs consistent with their goals of origin AS validation. 
Based on the SIDR WG activities, this means validating ROAs, using EE 
resource certs (which contain 3779 extensions). This is not a context 
where browsers or other commodity cert processing software will be 
used. I know of at least two independently-developed, open source 
implementations of RP software that deal validate ROAs using resource 
certs. There may be 1 or 2 more implementations in process.  Four of 
the five RIRs have working CA software that issues resource certs, 
and several of them have adopted the offline/online CA model to which 
you refer. So concerns about the difficulty of using X.509 certs here 
seem unfounded.
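The RFC 3779 path validation semantics mentioned above boil down, at each step in a chain, to a containment test: a subordinate certificate's address resources must be encompassed by its issuer's. A toy sketch of that core check, using prefix strings rather than the DER-encoded IPAddrBlocks extension (the prefixes are invented for illustration):

```python
from ipaddress import ip_network

def resources_encompassed(child_prefixes, parent_prefixes):
    """True if every child prefix falls within some parent prefix."""
    parents = [ip_network(p) for p in parent_prefixes]
    return all(
        any(ip_network(c).subnet_of(p)
            for p in parents if p.version == ip_network(c).version)
        for c in child_prefixes
    )

# A cert holding 192.0.2.0/24 can certify a subordinate's 192.0.2.128/25 ...
assert resources_encompassed(["192.0.2.128/25"], ["192.0.2.0/24"])
# ... but not a prefix outside its own holdings.
assert not resources_encompassed(["198.51.100.0/24"], ["192.0.2.0/24"])
```

Real RP software applies this test (plus the usual X.509 checks) at every link in the chain, which is why a single-TA arrangement makes undetected conflicts between RIRs impossible.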


Given these observations, the public declaration last year by the NRO 
that all 5 RIRs will offer RPKI service as of 1/1/11, and the ongoing 
SIDR WG efforts, most of this discussion seems OBE at this stage.


Steve


Re: On the IAB technical advice on the RPKI

2010-03-18 Thread Stephen Kent

At 2:17 PM -0400 3/18/10, Phillip Hallam-Baker wrote:

Before declaring victory, lets see if anyone actually uses it to
validate any data.


fair enough.  anything else is speculation by both of us, so lets 
table the discussion for a year or so.


Steve


Re: draft-ietf-dnsext-dnssec-gost

2010-02-15 Thread Stephen Kent

At 2:18 PM -0500 2/12/10, Edward Lewis wrote:

At 10:57 -0500 2/12/10, Stephen Kent wrote:


If we look at what the CP developed in the SIDR WG for the RPKI says, the
answer is the IESG (going forward, after an initial set of algs are adopted
based on the SIDR WG process). In the IPSEC, TLS, and SMIME contexts, the WGs
themselves have made the decisions, which the IESG then approves by virtue of
the usual standards track RFC approval process. I do not believe that the
criteria have been documented uniformly across these WGs.


What is CP?


Sorry for the acronym ambiguity. The CP is the certificate policy 
(for the RPKI).  Every major PKI has a CP. These documents follow the 
outline provided in RFC 3647.


Steve


Re: draft-ietf-dnsext-dnssec-gost

2010-02-15 Thread Stephen Kent

At 8:50 AM -0800 2/12/10, David Conrad wrote:

On Feb 12, 2010, at 7:57 AM, Stephen Kent wrote:
 Who gets to decide on what algorithms get first class status and 
based on what criteria?
 If we look at what the CP developed in the SIDR WG for the RPKI 
says, the answer is the IESG


So, they're going to flip a coin or what?

Who is largely irrelevant.  The criteria is the interesting bit.


Both issues are relevant. Most of the other WGs dealing with this 
issue have been in the security area and feel comfortable making 
these decisions. The IESG has been comfortable with their decisions. 
Note that changes have been made for other than technical reasons, 
e.g., initially TLS had DH & DSA as MUST and RSA as SHOULD, because 
of patent issues.  When the RSA patent expired, the roles were 
reversed. So the IESG has been an active participant in these 
decisions in the past.



  Steve brought up national algorithms, but we also have 
personal algorithms such as curve25519 or threefish.
 WGs like IPsec, TLS, and SMIME have been able to say no to 
personal algs for a long time.


IPsec, TLS, and SMIME are all one-to-one.  DNSSEC (in this context) 
is one-to-many.



Your observation is applicable to IPsec, not to S/MIME, and, for 
practical purposes, not for TLS.  An S/MIME message may be sent to 
multiple recipients, so it is not literally one-to-one. S/MIME 
accommodates algorithm diversity best for the public key algorithms 
used to encrypt the content encryption key.  It also can accommodate 
diversity for the algorithm used to sign the message, but at a higher 
cost. It does poorly if different recipients make use of different 
content encryption algorithms. TLS is nominally 1-1, but in reality, 
the vast majority of TLS use is for access to web sites that have a 
very diverse set of clients. That requires a small set of mandatory 
algorithms, to ensure interoperability.  Finally, the question  posed 
was about how have decisions on which algorithms are mandatory to 
implement have been decided in the IETF in the past. My reply 
addressed that question.




Regards,
-drc



Re: draft-ietf-dnsext-dnssec-gost

2010-02-12 Thread Stephen Kent

...
As a document shepherd I have made note that this is desired, but at
the same time this is a topic that was outside the scope of the working
group.
This is on the other hand a topic that belongs in the IETF review.

So my questions to the IETF (paraphrasing George Orwell):

Are all crypto algorithms equal, but some are more equal than others?


not all are equal, from a purely cryptanalytic perspective. Among those that
may be equivalent from that perspective, there are other meaningful 
differences, e.g., how widely are the algs implemented and used.


Who gets to decide on what algorithms get first class status and 
based on what criteria?


If we look at what the CP developed in the SIDR WG for the RPKI says, 
the answer is the IESG (going forward, after an initial set of algs 
are adopted based on the SIDR WG process). In the IPSEC, TLS, and 
SMIME contexts, the WGs themselves have made the decisions, which the 
IESG then approves by virtue of the usual standards track RFC 
approval process. I do not believe that the criteria have been 
documented uniformly across these WGs.


Steve brought up national algorithms, but we also have personal 
algorithms such as curve25519 or threefish.


WGs like IPsec, TLS, and SMIME have been able to say no to personal 
algs for a long time.


Steve


draft-ietf-dnsext-dnssec-gost

2010-02-11 Thread Stephen Kent
I recommend that the document not be approved by the IESG in its 
current form.  Section 6.1 states:



6.1.  Support for GOST signatures

   DNSSEC aware implementations SHOULD be able to support RRSIG and
   DNSKEY resource records created with the GOST algorithms as
   defined in this document.


There has been considerable discussion on the security area 
directorate list about this aspect of the document. All of the SECDIR 
members who participated in the discussion argued that the text in 
6.1 needs to be changed to MAY from SHOULD. The general principle 
cited in the discussion has been that "national" crypto algorithms 
like GOST ought not be cited as MUST or SHOULD in standards like 
DNSSEC. I refer interested individuals to the SECDIR archive for 
details of the discussion.


(http://www.ietf.org/mail-archive/web/secdir/current/maillist.html)

Steve


Re: I-D ACTION:draft-housley-iesg-rfc3932bis-10.txt

2009-10-14 Thread Stephen Kent

At 1:11 AM -0700 10/13/09, SM wrote:

Hi Steve,
At 12:18 12-10-2009, Stephen Kent wrote:
When the site closes, do you believe that all of the material 
published there will become inaccessible, not archived anywhere?  I 
doubt that.


I am not sure whether all the material will be available at 
archive.org or other archiving sites.  If the material is archived 
on one site only, there's a risk of too big to fail.  I can change 
the material I publish.  That's not always good if the material is 
to be used as a reference (immutability).  It took me some time to 
understand that sometimes we need access to an old version of a 
specification, and not the latest one, even if that version contains 
mistakes.  That's part of the intrinsic qualities I mentioned in my 
earlier message.


I agree with your observations, but I don't think that the RFC series 
is the only way to achieve the characteristics you cite.

The status quo does not mandate that the RFC Editor and the IESG 
agree; it allows the RFC Editor to make a unilateral decision to 
ignore an IESG note. So, I don't agree with the second part of your 
statement above. I do agree that the change diminishes the 
independence of the RFC Editor.


You are trying to persuade me to change my stance while I am trying 
to persuade you to change yours.  It is in essence a dialogue.  If 
one of us is the authority which makes the decision, that person can 
make an unilateral decision and ignore the other person's opinion. 
By invoking that authority, the person causes a break down of the 
dialogue.  When two parties are bound to work together on a regular 
basis, that can result in an uncomfortable situation.  Now, if we 
have to add an appeal (it's not being made in an individual 
capacity) to that, we can end up with a larger issue instead of a 
difference of perspectives between an individual and a body.


Good points. My view is that a shift in the balance of power is 
appropriate, and that an appeal process can increase confidence that 
the IESG will not abuse its power (if granted). But, that is not a 
guarantee. Not sure about your last sentence above, since the RFC 
Editor is no longer an individual, but a function effected by a set 
of individuals. The IESG, another set of individuals, is at least 
appointed through an open process, unlike the RFC Editor.


Let's step away from the draft being discussed for a few minutes and 
ponder on whether either of us is being unreasonable.  Now, if we 
cannot figure out the answer, let's ask (figuratively) someone else 
for advice.  We have the choice of accepting the advice even though 
we are right or seeking advice that will suit us.


Of course you and I are reasonable people :-). Not sure how the rest 
of the paragraph above fits into our discussion though.



Steve


Re: I-D ACTION:draft-housley-iesg-rfc3932bis-10.txt

2009-10-12 Thread Stephen Kent

At 10:29 AM -0700 10/9/09, SM wrote:

...
Section 1.1 of the draft mentions that:

  The IESG may provide an IESG note to an Independent Submission or
   IRTF Stream document to explain the specific relationship, if any, to
   IETF work.

That's a may.  From what you said, I deduce that you would prefer 
that line to say:


  The IESG will provide an IESG note to an Independent Submission ...

The reasons for the IESG Note are mentioned in Section 3.  None of 
them are about a label saying that the RFC is not a product of a WG.


I think "may" is the right term here. But, if the IESG chooses to 
assert this prerogative, I would like to see the note inserted, 
without the need for RFC Editor concurrence.


When the RFC series was first established, the need for archival, 
searchable, open publication of Internet-related documents was a 
good argument for the autonomy of the RFC Editor function. 
Moreover, the RFC Editor function pre-dates the existence of the 
IETF and the IESG, by many years. But, times change. The 
availability of search engines like Google make it possible for 
essentially anyone to publish material that is widely accessible, 
relatively easy to find, and more or less archival. Also, the vast 
majority of the RFCs published for many years are documents 
approved by the IESG. Thus it seems reasonable to revisit the 
degree of autonomy the RFC Editor enjoys relative to the IESG. The 
current proposal does not change the relationship very much in 
practice, but I understand that it is an important issue in 
principle, and the IETF membership has debated it in this context, 
extensively.


An open source advocate once suggested to me that I could use 
Geocities to publish material.  That site is closing this month. 
There are differences between publishing something on your web site 
and publishing a RFC.  The latter does not require search engine 
optimization for wide dissemination.  A RFC has intrinsic qualities 
because of the way it is produced.  There are some RFCs with IESG 
notes, such as RFC 4144, which I read as good advice.


When the site closes, do you believe that all of the material 
published there will become inaccessible, not archived anywhere?  I 
doubt that.


The current proposal undermines the independence of the RFC Editor 
(ISE in practice).  It changes the relationship from one where the 
various parties should work together and come to an agreement to a 
tussle between the RFC Editor and the IESG.  I don't think that an 
appeal is a good idea.  I didn't object to it as the IESG folks may 
feel better if they had that mechanism.  However, I do object to 
making the outcome mandatory.


The status quo does not mandate that the RFC Editor and the IESG 
agree; it allows the RFC Editor to make a unilateral decision to 
ignore an IESG note. So, I don't agree with the second part of your 
statement above. I do agree that the change diminishes the 
independence of the RFC Editor.


Steve


Re: I-D ACTION:draft-housley-iesg-rfc3932bis-10.txt

2009-10-09 Thread Stephen Kent

Dave,

I'd like to address some of the points you made in your message to 
Russ re 3932bis:



...

The first assumption is that there is a real problem to solve here.  Given 40
years of RFC publication, without any mandate that the RFC Editor 
must include a
note from the IESG, and without any critical problems resulting, we 
have yet to

see any real case made for changing the current rule, which makes inclusion of
an IESG note optional.


As ads for financial products remind us, "Past performance is not a 
guarantee of future performance."  Since we have been making changes 
in IETF functions over time, including the RFC Editor function, I 
don't think it is unreasonable to formalize this aspect of the 
relationship between the IESG and the RFC Editor, before a problem 
arises.



The second assumption is that an IESG note is sometimes, somehow essential.  I
believe you will be very hard-pressed to make an empirical case for that
assumption, and the problem with trying to make only a logical case is that a
competing logical case is always available.  In other words, in 40 years of
RFCs, the damage that was caused by not having a note added by an 
authority that
is separate from the author, or the damage that was prevented by 
adding one, is
almost certainly absent.  That does not make it a bad thing to add 
notes, but it

makes the case for /mandating/ such notes pretty flimsy.


I believe that most folks recognize that the public, in general, does 
not distinguish between RFCs that are the product of IETF WGs, 
individual submissions, independent submissions, etc. I think the 
IESG has a legitimate role in ensuring that RFCs that are not the 
product of WGs are appropriately labelled, and inclusion of an IESG 
note is a reasonable way to do that.


When the RFC series was first established, the need for archival, 
searchable, open publication of Internet-related documents was a good 
argument for the autonomy of the RFC Editor function. Moreover, the 
RFC Editor function pre-dates the existence of the IETF and the IESG, 
by many years. But, times change. The availability of search engines 
like Google make it possible for essentially anyone to publish 
material that is widely accessible, relatively easy to find, and more 
or less archival. Also, the vast majority of the RFCs published for 
many years are documents approved by the IESG. Thus it seems 
reasonable to revisit the degree of autonomy the RFC Editor enjoys 
relative to the IESG. The current proposal does not change the 
relationship very much in practice, but I understand that it is an 
important issue in principle, and the IETF membership has debated it 
in this context, extensively.



The third assumption is that the real locus of control, here, needs to be the
IESG.  Even though you are now promoting an appeal path, it's a fallback
position that derives from the assumption that the IESG should be the ultimate
arbiter of all quality assessment, not just for IETF RFCs but for 
all RFCs.  For

independent submissions, this distorts the original model, which is that the
IESG is merely to be consulted for conflicting efforts, not general-purpose
commentary on the efficacy or dangers of an independent document. 
Really, Russ,
it's OK for some things to proceed without having the IESG act as a 
gatekeeper.


My comment above addresses this issue as well.


The fourth assumption is that an added layer of mechanism, in the form of an
appeal path, is worth the cost.  An honest consideration of those 
costs has been

absent, yet they are significant.


I think the biggest cost of an appeal mechanism is incurred when 
appeals arise, although there are costs associated with defining the 
mechanism. Since you argued above that we ought not expend a lot of 
effort to deal with problems that have not arisen, maybe we ought not 
worry about the costs of appeals that have not yet arisen :-).


The fifth assumption is the odd view that Jari put forward, namely 
that creating

an appeal path somehow retains the independence of the editor.  In other
words, impose a mechanism designed to reverse decisions by the editor, but say
that the editor retains independence.  Confusing, no?


I agree that the quote cited above is not a good way to characterize 
the value of an appeals process. Perhaps a better way to state the 
value of the appeal process is to say that it provides a way for the 
Editor to address a situation when it believes that the IESG has 
insisted on inserting an inappropriate note.


I support 3932bis, with the appeal provision.

Steve


Re: draft-housley-iesg-rfc3932bis and the optional/mandatory nature of IESG notes

2009-08-31 Thread Stephen Kent

Joel,

I agree that IESG notes should be rare, but primarily because 
independent stream submissions should be rare :-).


Long ago, when I served on the IAB, we grappled with this problem, 
and failed to find a good solution. Despite what we say about RFC 
status and origin markings, the public sees RFCs as carrying the 
imprimatur of the IETF (not just that of the RFC Editor). When Jon 
Postel was the RFC editor, we were pretty comfortable with his 
judgement on these matters, so this was less of an issue. However, 
times have changed, and I would be happy to see inclusion of an IESG 
note be mandatory, contrary to historical practice.


Steve


Re: [TLS] Last Call: draft-ietf-tls-extractor (Keying Material Exporters for Transport Layer Security (TLS)) to Proposed Standard

2009-08-05 Thread Stephen Kent

At 10:05 AM -0400 8/5/09, Dean Anderson wrote:

Ned and Stephen,

If you mean the recent message traffic about the 'intention to remove'
extractor from a list of patented documents, that hasn't happened so far
and an 'intention to remove' doesn't mean it isn't patented. It is
possible that Certicom can later say it was a misunderstanding and that
the official documents were correct. As evidence of their view, they
have the official documents.  As evidence of your view, you have an
unofficial and vague email message apparently contrary to the official
documents, and you will have to argue you based your decision on the
unofficial and vague message rather than the official IPR Disclosures. 
I think one cannot get to a Qualcomm v. Broadcom finding under such

circumstances.


I reiterate my support for the cited I-D. period.

Steve



Re: [TLS] Last Call: draft-ietf-tls-extractor (Keying Material Exporters for Transport Layer Security (TLS)) to Proposed Standard

2009-07-30 Thread Stephen Kent
I too support publication of this document as a Standards Track RFC, 
in light of the salient message traffic of late.


Steve


Re: Let's move on - Let's DNSCurve Re: DNSSEC is NOT secure end to end

2009-06-12 Thread Stephen Kent

At 10:32 PM -0400 6/11/09, David Conrad wrote:

Hi,

On Jun 11, 2009, at 8:35 PM, Stephen Kent wrote:

But, in a DNSSEC environment, IANA performs two roles:
- it coordinates the info from the gTLDs and ccTLDs and constructs
  the authoritative root zone file
- it signs the records of that file


Nope.  Just to clarify things:

IANA (well, ICANN as the IANA functions operator) receives and 
validates root zone changes.


VeriSign constructs and publishes the root zone to the root server operators.

In the context of DNSSEC, as documented at 
http://www.icann.org/en/announcements/announcement-2-03jun09-en.htm, 
VeriSign will have operational responsibility for the zone signing 
key and ICANN will manage the key signing process.


David,

Thanks for the clarification.  I just wanted to emphasize the two 
distinct functions that IANA performs in the DNSSEC context, without 
getting into the ZSK/KSK details and the current proposed split of 
responsibility between IANA and VeriSign (which is outside the IETF 
DNSSEC architecture, right?).


Steve


Re: Let's move on - Let's DNSCurve Re: DNSSEC is NOT secure end to end

2009-06-11 Thread Stephen Kent

At 10:41 AM +1000 6/11/09, Mark Andrews wrote:

In message p06240803c65430cf6...@[10.10.10.117], Stephen Kent writes:

 Joe,

 You have argued that DNSSEC is not viable because it requires that
 everyone adopt IANA as the common root.


Which isn't even a requirement.  Alternate root providers just need
to get copy of the root zone with DS records and sign it with their
own DNSKEY records for the root.

ISP's that choose to use alternate roots might get complaints however
from their customers if they are validating the answers using the
trust-anchors provided by IANA.  This however should be seen as a
good thing as the ISP can no longer tamper with the DNS without
being detected.  If an ISP can convince all their customers that the
alternate roots are a good thing then this won't become an issue.


Fair point, although I think we all want to avoid the sort of 
Balkanization that this suggests.


Steve


Re: Publicizing IETF nominee lists [Fwd: Last Call: draft-dawkins-nomcom-openlist (Nominating Committee Process: Open Disclosure of Willing Nominees) to BCP]

2009-06-11 Thread Stephen Kent

At 10:51 AM +0200 6/11/09, Lars Eggert wrote:


I agree with Sam and Jari. This is a good and overdue change.

Lars



I also agree with this proposal, based on several experiences serving 
on NOMCOM.


Steve


Re: Publicizing IETF nominee lists [Fwd: Last Call: draft-dawkins-nomcom-openlist (Nominating Committee Process: Open Disclosure of Willing Nominees) to BCP]

2009-06-11 Thread Stephen Kent

Joe,

Having served on NOMCOM more than once, and having been solicited for 
inputs every year, I much prefer publishing the names of folks who 
have consented to be considered for IAB and IESG positions.  The addition 
of ringers to lists that are sent out (to hide the identities of 
the true candidates) wastes the time of a lot of folks who are asked 
to provide feedback on these non-candidates. It also means that 
someone who is a real candidate may not receive feedback because 
people assume the individual is a ringer!


Steve


Re: Let's move on - Let's DNSCurve Re: DNSSEC is NOT secure end to end

2009-06-11 Thread Stephen Kent

Phil,

The examples you give about baked-in trust anchors are valid, but 
they point to decisions by vendors to simplify their designs at the 
cost of security and functionality. I've been told that it is very 
hard, if not impossible, to permanently remove some vendor-supplied 
TAs in a popular browser.  These are not fundamental results of 
architectural decisions of the sort the IETF makes, but vendor 
choices that lead to possible problems for users.


I think I understand the multi-party, RP-centric threshold approach 
to managing the DNSSEC root that you outlined. But, in a DNSSEC 
environment, IANA performs two roles:

- it coordinates the info from the gTLDs and ccTLDs and constructs
  the authoritative root zone file
- it signs the records of that file

Any scheme that allows multiple entities to confirm the content of 
the root zone file also has to include a means for these entities to 
independently acquire and verify the master file data and to create a 
separate, distinct master file if they disagree.  This is a lot more 
complex that what you outlined in your message (from an from an 
administrative vs. crypto perspective). It also raises questions 
about how complex RP software has to be in dealing with multiple sets 
of quasi-authoritative root authorities.  All experience to date 
suggests that RPs fare poorly when trying to deal with much less 
complex trust anchor situations in other PKI environments today.


It is conceivable (under the new administration) that ICANN will stop 
being under the control of the U.S DoC, but it also is not clear that 
such a change would ameliorate the concerns of all governments with 
regard to this issue. I think the set of folks who feel a need to use 
other than the current, proposed model (IANA as the DNSSEC root) are 
a very small minority of the set of public Internet users and thus 
they should bear the burden of dealing with the resulting costs and 
complexity for managing alternative root schemes. I don't think that 
such costs and complexity should be borne by the vast majority of 
Internet users. Its also not clear how long one might spend debating 
the question of whether any single scheme would satisfy all of the 
players who are not comfortable with the current model.


Steve


Re: Let's move on - Let's DNSCurve Re: DNSSEC is NOT secure end to end

2009-06-10 Thread Stephen Kent

Joe,

You have argued that DNSSEC is not viable because it requires that 
everyone adopt IANA as the common root.  I agree that under the 
current IANA management situation many folks may be uncomfortable 
with IANA as the root.  However, in practice, the world has lived 
with IANA as the root for the non-secure version of DNS for a long 
time, so it's not clear that a singly-rooted DNSSEC is not viable 
based on this one concern.  Moreover, DNSSEC is a form of PKI, and in 
ANY PKI, it is up to the relying parties to select the trust anchors 
they recognize.  In a hierarchic system like DNS, the easiest 
approach is to adopt a single TA, the DNS root. But, it is still 
possible for a relying party to do more work and select multiple 
points as TAs. I would expect military organizations in various parts 
of the world to adopt a locally-managed TA store model for DNSSEC, to 
address this concern. However the vast majority of Internet users 
probably are best served by the single TA model.
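That trust-anchor choice can be modeled abstractly: whether a name validates depends entirely on which anchors the relying party has configured, not on any property of the hierarchy itself. A toy sketch (the zone names and chain below are invented; a real resolver verifies DNSKEY/DS RRsets cryptographically rather than by name):

```python
# Map each zone to the parent zone that vouches for it.  Entirely
# illustrative data standing in for a signed delegation chain.
CHAIN = {"example.com.": "com.", "com.": "."}

def validates(zone: str, ta_store: set) -> bool:
    """Walk up the delegation chain; accept if we reach a configured TA."""
    while zone not in ta_store:
        if zone not in CHAIN:
            return False  # ran off the top without hitting a trust anchor
        zone = CHAIN[zone]
    return True

# The common configuration: a single trust anchor at the root.
assert validates("example.com.", {"."})
# A locally-managed store that anchors trust at "com." instead.
assert validates("example.com.", {"com."})
# An RP whose store contains no anchor on this path rejects the name.
assert not validates("example.com.", {"mil."})
```

The single-TA model is simply the configuration in which every RP's store holds one entry, the root; an organization wanting local control bears the cost of maintaining a richer store.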


As for DNSCurve, I agree with the comments that several others have 
made, i.e., it does not provide the fundamental security one wants in 
DNS, i.e., an ability to verify the integrity and authenticity of 
records as attested to by authoritative domains, in the face of 
caching.



Steve


Re: Comments requested on recent appeal to the IESG

2009-02-21 Thread Stephen Kent

At 7:06 PM -0800 2/20/09, Dave CROCKER wrote:

Stephen Kent wrote:

At 9:00 PM -0800 2/19/09, Hallam-Baker, Phillip wrote:

Just as a matter of observation, ...

...

   I have not read the doc in
question,...



Hey guys.  As someone who is frequently faced with trying to parse 
out what are and are not the commonly held views among the security 
community, I'm actually interested in this type of exchange.


But as you both note, this exchange isn't critical to resolution of 
the appeal.


For those of us who want to see this appeal dispatched as quickly and 
as painlessly as possible, is there a chance that you can continue 
the exchange under a different guise, at a minimum under an entirely 
independent thread?


d/


Dave,

My belief is that IF the doc conflates authentication and 
authorization, then some intelligent editing probably can fix that 
problem quickly.  Since, as you and others have noted, the WG is on 
board with this doc, the issue Phil raised, and to which I responded, 
ought not affect approval of the document.


Steve


RE: Comments requested on recent appeal to the IESG

2009-02-20 Thread Stephen Kent

At 9:00 PM -0800 2/19/09, Hallam-Baker, Phillip wrote:


Just as a matter of observation, there is not and never has been a 
security requirement to rigidly separate authentication and 
authorization. Indeed there is no real world deployment in which 
authentication and authorization are not conflated to some degree.


Authentication and authorization (aka access control) are distinct 
security services. Too often they are confused, and the result of 
such confusion is never a great outcome.  I have not read the doc in 
question, but your dismissal of this issue is not persuasive.


 The separation of authentication and authorization is a matter of 
administrative and operational convenience.


Nonsense. The two are implemented via a wide range of mechanisms, 
many of which are independent of one another. I may use passwords or 
challenge-response mechanisms or PKI technology for authentication, 
and use various identity-based authorization mechanisms with any of 
these means of verifying an identity. Thus there are good technical 
and design reasons to consider the services and mechanisms 
separately, as part of a modular design approach.
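A toy sketch of that modularity (all names and data below are invented): the authentication mechanism can be swapped, password, challenge-response, or PKI, without touching the authorization check, precisely because the two services sit behind separate interfaces:

```python
def authenticate_password(user: str, password: str, pw_db: dict) -> bool:
    """Authentication: verify a claimed identity (one mechanism of many)."""
    return pw_db.get(user) == password

def authorize(user: str, action: str, acl: dict) -> bool:
    """Authorization: decide what a verified identity may do."""
    return action in acl.get(user, set())

pw_db = {"alice": "s3cret", "bob": "hunter2"}
acl = {"alice": {"open-office-door", "enter-lab"},
       "bob": {"open-office-door"}}

# Both employees authenticate, but only one is authorized for the lab:
assert authenticate_password("bob", "hunter2", pw_db)
assert not authorize("bob", "enter-lab", acl)
assert authorize("alice", "enter-lab", acl)
```

Replacing `authenticate_password` with a certificate-based check would leave `authorize` and the ACL untouched, which is the design benefit of keeping the two services distinct.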


It is very rarely the case that every privilege that might 
potentially be granted to a user is known in advance. Hence the 
benefit of maintaining a distinction. But in practice the fact that 
a party holds a valid authentication credential is in itself often 
(but not always) sufficient to make an authorization decision in 
low-risk situations.


True, but the fact that you had to employ several qualifiers in these 
sentences to make them true illustrates the benefits of 
distinguishing between the terms.


 Thus an objection based on the mere risk that such a conflation may 
occur is not justified as such conflation has occured in every 
practical security system ever.


I  don't know if the objection is an important one here, but I do 
think it is important in general.


We do not issue employee authentication badges to non-employees. 
Thus an employee-authentication badge will inevitably carry de-facto 
authorization for any action that is permitted to every employee 
(like open the office door).


A good example to make my point.  It is typically the case that if 
all employees have ID badges, these badges grant access to most 
buildings/rooms of the employer's facilities. But many other rooms of 
an employer's facilities typically are off limits to all but a few 
employees.


The Authorization/Authentication model is in fact broken, in a 
modern system such as SAML you actually have three classes of data 
with the introduction of attributes.


SAML allows one to make security assertions of all sorts. The fact 
that one can make both authentication and authorization assertions 
using the same construct is distinct from the question of whether 
conflating the two terms is a good idea in general, as you seem to be 
arguing.


Steve

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: how to contact the IETF

2009-02-09 Thread Stephen Kent

Alex,

The conclusion I draw from this experience differs from yours. If the 
individuals who sent the messages in question choose to become 
involved constructively, then there can be some benefit. But, the act 
of sending the messages in question has generated ill will, so it was 
a bad way to begin a constructive, contributory process.


Steve


Re: problem dealing w/ ietf.org mail servers

2008-07-04 Thread kent
I think I could have been clearer with my message.  It wasn't intended as
either a criticism of the ietf list management (in fact, I use precisely the
same anti-spam technique) or a request for help with configuration of my
mailservers (I may not be the sharpest knife in the drawer, but usually I can
figure these things out on my own).

Instead, I was presenting what I thought was an interesting example of a 
subtle problem that can come up in ipv6 deployment.  

The mailserver in question uses a default redhat enterprise build (actually
centos).  ipv6 is either enabled by default, or just has a single check box,
with no further information.  The fact that ipv6 is enabled so trivially
carries the implication that just enabling ipv6 won't actually damage
anything. 

Now I know different.  Just enabling ipv6 on an otherwise correctly
configured and functioning ipv4 box *will* cause damage -- it will cause mail
that would have been delivered to not be delivered.  I could be wrong, but
this strikes me as a trap that lots of people could fall into. 

As I mentioned, my servers actually do reject mail if they can't find a 
reverse dns for the senders IP.  Some of those servers use ipv6; in light of 
all 
this I'm going to have to rethink that decision.  For a server, the 
combination of enabling ipv6 and using this particular anti-spam technique 
may drastically increase the number of false positives -- especially as ipv6 
gets more widely deployed.

Best Regards
Kent



Re: problem dealing w/ ietf.org mail servers

2008-07-04 Thread kent
On Fri, Jul 04, 2008 at 10:53:41AM -0400, Keith Moore wrote:
 Now I know different.  Just enabling ipv6 on an otherwise correctly
 configured and functioning ipv4 box *will* cause damage -- it will cause 
 mail
 that would have been delivered to not be delivered.  I could be wrong, but
 this strikes me as a trap that lots of people could fall into. 
 
 that's one way to look at it.  another way to look at it is that poorly 
 chosen spam filtering criteria *will* cause damage, because conditions 
 in the Internet change over time.

Can't disagree with that :-) 

In fact, I've never been very happy with this particular technique for
dealing with spam.  Reverse dns is not required for standards-compliant
delivery of mail, and it is my personal opinion that the ietf in particular
should not be using it as an absolute filtering criterion.  [Also, in my 
experience it hasn't been particularly effective.]

 of course, IPv6 will often get blamed for the problem because it's 
 something that the sender can control, whereas the spam filters are not 
 accountable to anyone.

That's a bit of an overstatement -- very frequently spam filters are
accountable to the people receiving the email, and in my experience, most 
people would rather deal with some spam than lose important email.

Kent


problem dealing w/ ietf.org mail servers

2008-07-02 Thread 'kent'
Hi Rich

I'll cc this to the ietf list, as you suggested.

I've found the problem.  It may or may not be something that ietf wants to
do something about -- I would think they would, since it seems to have global
significance.  But I can fix it from this end. 

Specifically, the problem Dave encountered earlier was that the ietf mail
server was rejecting mail without reverse dns, and since the ietf mail server
and the mipassoc.org/dkim.org/bbiw.net mail servers all had ip6 addresses,
and ip6 is used preferentially, and I hadn't set up reverse dns, they were
dropping all mail.  I fixed that, and things started working. 

The only domains I control that had explicit ipv6 addresses were Dave's
domains.  For example, graybeards.net:

# host graybeards.net
graybeards.net has address 72.52.113.69
graybeards.net has IPv6 address 2001:470:1:76:0::4834:7145
graybeards.net mail is handled by 10 mail.graybeards.net.
# host mail.graybeards.net
mail.graybeards.net has address 72.52.113.69
mail.graybeards.net has IPv6 address 2001:470:1:76:0::4834:7145
# host 2001:470:1:76:0::4834:7145
5.4.1.7.4.3.8.4.f.f.f.f.0.0.0.0.6.7.0.0.1.0.0.0.0.7.4.0.1.0.0.2.ip6.arpa 
domain name pointer mail.graybeards.net.
#

Mail now works for this domain.

But, it turns out, the ietf.org mail servers are rejecting mail from other
domains as well.  Here's a log entry for one of your messages:

Jul  2 13:10:23 mail sendmail[31264]: STARTTLS=client, relay=mail.ietf.org., 
version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256
Jul  2 13:10:29 mail sendmail[31264]: m62Hvfbm011799: to=[EMAIL PROTECTED], 
ctladdr=[EMAIL PROTECTED] (1023/1023), delay=02:12:32, xdelay=00:00:28, 
mailer=esmtp, pri=662167, relay=mail.ietf.org. [IPv6:2001:1890:1112:1::20], 
dsn=4.7.1, 
stat=Deferred: 450 4.7.1 Client host rejected: cannot find your reverse 
hostname, [2001:470:1:76:2c0:9fff:fe3e:4009]

Rejecting when you can't find a reverse is, of course, a common anti-spam 
technique. 

However, this last address, 2001:470:1:76:2c0:9fff:fe3e:4009, is not
explicitly configured on the sending server; instead, it is being implicitly
configured through ip6 autoconf stuff:

eth0  Link encap:Ethernet  HWaddr 00:C0:9F:3E:40:09  
  inet addr:72.52.113.176  Bcast:72.52.113.255  Mask:255.255.255.0
  inet6 addr: fe80::2c0:9fff:fe3e:4009/64 Scope:Link
  inet6 addr: 2001:470:1:76:2c0:9fff:fe3e:4009/64 Scope:Global

The 2 ip6 addresses, the link-local address and the global address, are
generated from the mac address (you can see the 0x4009 at the end) and
configured automatically, merely because ipv6 is enabled on this box by
default and a global prefix is available.
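That derivation is the standard modified EUI-64 construction used by stateless autoconfiguration; a minimal sketch (function name mine) showing how 00:C0:9F:3E:40:09 produces the 2c0:9fff:fe3e:4009 suffix:

```python
# Minimal sketch of the modified EUI-64 construction (RFC 4291, App. A)
# that SLAAC uses to build an interface identifier from a MAC address.
def eui64_interface_id(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                 # flip the universal/local bit
    octets[3:3] = [0xFF, 0xFE]        # splice ff:fe into the middle
    groups = [(octets[i] << 8) | octets[i + 1] for i in range(0, 8, 2)]
    return ":".join(f"{g:x}" for g in groups)

print(eui64_interface_id("00:C0:9F:3E:40:09"))  # -> 2c0:9fff:fe3e:4009
```

which is exactly the suffix of the autoconfigured global address in the ifconfig output above.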

That is to say, it appears the ietf.org mail server is probably now rejecting
mail from *any* box that is getting a default global ipv6 address, since
those addresses will most likely not be in ip6.arpa.  There may be a whole
lot of boxes in this situation. 
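The PTR lookup such a check performs happens at the nibble-reversed name under ip6.arpa; a small stdlib sketch for the rejected address above (illustrative only):

```python
import ipaddress

# Sketch: compute the ip6.arpa name that a reverse-DNS (PTR) check
# resolves. An autoconfigured address passes the check only if a PTR
# record has actually been published at this long nibble-reversed name.
addr = ipaddress.ip_address("2001:470:1:76:2c0:9fff:fe3e:4009")
print(addr.reverse_pointer)
```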

Kent

PS -- I'm not sure this will actually make it to the ietf list :-) ...


RE: RNET: Random Network Endpoint Technology

2008-06-23 Thread Stephen Kent

Chad,

Your message of 4/8 ended with a list of changes needed to IPv6 
implementations to implement RNET. Changes to processing logic are 
just as serious as changes to the format.


Steve
---

The following changes need to be made to the IP Version 6 Protocol Logic, in
routers, in order to implement this technology:

   1) encryption routines
   2) recognition of RNET Route Requests
   3) generation and recognition of RNET errors
   4) routing table modifications
   NB:  the RNET Host address may be stored in the host address of the
   route entry.  The Target Host address may be stored in the Netmask of
   the route entry.  The Gateway address may be stored in the gateway of
   the route entry.  The Route Decay may be stored in the Metric or be
   implemented through some system timer.
   5) routines to acquire keys from the RNET Centralized Server
   6) storage of the IP address of the RNET Centralized Server
   7) storage of the router's unique key
   8) storage of the RNET Global Key
   9) an additional flag for route entries marking them as RNET Route entries

 The following changes need to be made to the IP Version 6 Protocol Logic, in
hosts, in order to implement this technology:

   1) encryption routines
   2) generation of RNET Route requests
   3) recognition of RNET errors
   4) routines to acquire keys from the RNET Centralized Server
   5) storage of the IP address of the RNET Centralized Server
   6) storage of the host's unique key
   7) storage of the RNET Global Key
   8) an additional flag in the IP Stack to identify the assigned host address
   as being an RNET Host address so that the IP Stack is aware of the
   protocol to follow in initiating connections.  This allows
   differentiation between RNET Host addresses and regular IPs.


Re: Thoughts on the nomcom process

2008-03-17 Thread Stephen Kent

Mike,

I have to disagree with your characterization of the proper role of 
the IAB with regard to the NOMCOM process.


I have been on three NOMCOMs, including the one prior to this, so I 
too have some experience in the process.


My feeling is that the IAB may have been trying to assert too much 
authority in the process recently.  That certainly was my perception 
with the previous NOMCOM, and the report about this year's activities 
suggests that the problem continues.


RFC 3777 is ambiguous wrt some details of the process, especially the 
IAB's purview re confirming IESG candidates selected by the NOMCOM. 
One could read 3777 so as to allow the IAB to effectively supplant 
the NOMCOM, e.g., by refusing to approve a slate of candidates until 
one candidate, acceptable to the IAB, is named for a given position. 
This is clearly not what 3777 intends, as it undermines the NOMCOM's role, 
yet I have seen behavior that comes close to this.


I think the preferred way forward is to clarify 3777 to make clear 
that the IAB must not engage in actions that are tantamount to 
usurping the role of the NOMCOM.


As for the IAB publication you cited in your message, I don't recall 
seeing the RFC number for it.  If it was just an informational RFC, I 
don't believe that has standing relative to 3777, which is the result 
of the usual IETF process.  The IAB is not empowered to write an 
interpretation of an RFC to expand the IAB's role unilaterally.


Steve


RE: Last Call: draft-klensin-net-utf8 (Unicode Format for Network Interchange) to Proposed Standard

2008-01-14 Thread Karlsson, Kent
John C Klensin wrote:

--Frank Ellermann wrote:
 ...
  Hopefully somebody can confirm that IND is correct, or not.
  For HT and FF I hope the final version will somehow express
  that both are not really bad, and as far as they're bad FF is
  worse than HT. 

See http://www.itscj.ipsj.or.jp/ISO-IR/077.pdf, which, somewhat
to my surprise, says that IND is an LF clone. However, IND has
long been deprecated, and never got any noticeable use, and is even
REMOVED from ECMA 48. So I think it is safe to ignore IND. Indeed,
I would prefer it not be mentioned in the document we're discussing.

(I would like to say the same about NEL, but NEL is alive
and the native line separator/terminator in EBCDIC-based systems,
and may escape as NEL rather than be converted to something else.)

 I'm open to consensus about changes for either HT or FF, but the
 theory of bad that was used to construct the spec was:
 
 (i) If a spacing control has the effect of setting the
 position of the next character, it is bad unless that position
 is unambiguous.   In addition, things are bad unless they are
 necessary in running text (as distinct from faking things that
 are better handled in markup, followed by either device-specific
 output or standard page representations, neither of which are
 normal text).

There is also another issue. If HT is converted to (presumably)
a sequence of SP, you will mess up bidi text. (See one of the
other mails I sent at about the same time as this one.)

 It is unambiguous for SP.  It is unambiguous for CRLF.
 Independent of the what is a line-end problem, it is somewhat
 ambiguous for CR or LF alone and for IND.  It is ambiguous for

Even though IND was, for some strange reason, defined as an LF
clone, it has long been deprecated, and AFAIK never saw any
popular use. I think it is best left forgotten and left in silence.
Note also that it is not only deprecated, but even REMOVED from
ISO/IEC 6429 (ECMA 48).

 HT.  It would be ambiguous for FF except that FF is assigned
 fairly clear semantics in NVT -- FF is not a line ending

Of course it is line ending. So is raw LF. That the new line
(under some circumstances) may be strangely indented is irrelevant.

 (CRLF FF is needed)

That is a combination I haven't heard of before and I DON'T
think it should be regarded as one NLF. There are TWO NLFs there,
CRLF and then FF.

 and as Bob Braden noted, there is a fairly clear
 rule that FF is to be interpreted as top of next page if one

Sure. But the line before it is also ended (no matter where the
top of next page line begins).

 knows what a page is and as blank line otherwise.  But that
 rule is sufficiently often ignored to call for considerable
 caution about FF, and the text now contains a cautionary note
 for that reason.

I agree that there should be caution, but not in the shape and
form it has in the draft we are discussing.

 There is an interesting demonstration of the law of unintended
 consequences here.  If we could tell that a string was
 unambiguously UTF-8 (or whatever) by looking at it, even if it
 contains nothing but ASCII characters, then there would be no
 reason to try to make net-utf8 a proper superset of NVT.  If we

I don't see why you really need to carry on the (unworkable in
a more general setting than ASCII, in particular it is unworkable
for the UCS) idea of using carriage return and BS for strange
overstriking. Even for ASCII, the ONLY aspect of that that worked
moderately well was using BS, _ (or similar) for underlining.
But note also that underlining can be achieved also in the UCS
(without using kludges like BS, _ for that) without the use
of a higher level protocol by instead using U+0332, COMBINING LOW
LINE. Though using a higher level protocol for getting underlining
is preferable (consider searching),  COMBINING LOW LINE would
still be much preferable over BS, _ (or similar).
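The two underlining conventions contrasted above can be shown side by side (a purely illustrative sketch):

```python
# Sketch: underlining with U+0332 COMBINING LOW LINE versus the legacy
# BS,_ overstrike convention discussed above.
word = "IETF"
combining = "".join(ch + "\u0332" for ch in word)   # each char + U+0332
overstrike = "".join(ch + "\b_" for ch in word)     # legacy teletype form
print(combining)   # renders underlined where the font/renderer supports it
```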

 could do that, we could also do away with the entire next line
 debate by prohibiting even CRLF and requiring the use of LS

LS would be a bad idea. See my other email (sent at approx. the
same time as this one). You would get (to you) unexpected effects
from bidi processing.

/Kent Karlsson


 (U+2028).  In retrospect, there might have been considerable
 advantages to forcing the ASCII- UTF-8 distinction by requiring
 that UTF-8 strings all start with a BOM, but it is far too late
 for that (and probably not, on balance, a good idea despite its
 advantages).  So I don't see how to get there from here -- we
 are stuck, for historical reasons, with CRLF on the wire as what
 The Unicode Standard calls NLF (incidentally, Unicode 5.0,
 Section 5.8, provides significant insight into the complexity of
 this problem and probably should have been referenced.  It would
 be even more helpful had Table 5-2 included identifying CRLF as
 a standard Internet wire form of NLF, not just binding that
 form to Windows.

RE: Last Call: draft-klensin-net-utf8 (Unicode Format for Network Interchange) to Proposed Standard

2008-01-14 Thread Karlsson, Kent
Frank Ellermann wrote:

 John C Klensin wrote:
 
  It is ambiguous for HT.
 
 Yes, but we typically don't care about this in protocols as
 long as it behaves like one or more spaces.  I think that's

Well, they don't exactly behave like a sequence of spaces.
(See below.)

 the idea of WSP = SP / HTAB ; white space in RFC 4234bis,
 waiting for its STD number.
 
 We talked about the 4234bis issue of trailing white space,
 which could cause havoc when it is silently removed, and a
 really empty line is not the same as an apparently empty
 line (i.e. CRLF CRLF vs. CRLF 1*WSP CRLF).
 
 A similar robustness principle would support to accept old
 HTAB-compression or HTAB-beautification (e.g. as first

Doing what you call 'old HTAB-compression' is a bad idea,
for several reasons (that I don't detail here, but for one:
see below).

 character in a folded line).  In other words WSP, not only
 SP.  It is clear that the outcome is ambiguous, but in some
 protocols I care about (headers in MIME, mail, and news)
 *WSP or 1*WSP are acceptable.   Admittedly it is a pain when
 signatures need white space canonicalization.  But replacing
 *WSP by *SP would only simplify this step, not get rid of it.
 
  [About CRLF]
  Unicode 5.0, Section 5.8, provides significant insight into
  the complexity of this problem and probably should have
  been referenced.  It would be even more helpful had Table
  5-2 included identifying CRLF as a standard Internet wire
  form of NLF, not just binding that form to Windows.
 
 Indeed, this chapter offers significantly *broken* insight
 for our purposes.  What they found was a horrible mess, then

Hmm, a minor mess, but not that horrible.

 they introduced wannabe-unambiguous LS + PS, and what they
 arrived at was messier than before.  Claiming that CRLF is 

They were introduced in Unicode 1.1, long before the text for
section 5.8 was drafted (originally as UTS 10).

One important point that you have missed is that LS and PS,
and the difference between THEM, are essential to the bidi
algorithm. What is or may be done with other NLFs is basically
a hack (most NLF are treated as if they were PS).

Note also that two LSes in sequence don't make a PS...

Assume for the moment that we were using only LS (as John
has suggested as a possible ideal). This would imply that
the bidi algorithm would consider the ENTIRE document,
however many thousands of pages it may span, as a SINGLE
paragraph. Thus, if there is any bidi processing, none of
the text can be displayed until the entire text has been
read in and bidi levelled as a whole, etc. That may have
some display effects (bidi controls, LRE, RLE, LRO, RLO,
span at most to the end of a paragraph, so if paragraph
ends are replaced by LS, the bidi controls may span more
text than they did originally). The hack is to regard all
NLFs except LS, VT, and FF as PS. Since CRLF (say) may occur
inside of what is actually a paragraph, this has some display
effects (limiting bidi controls range more than they were
originally), but at least bidi processing can be done piece
by piece of the text.

Bidi control codes are not talked about in the document
we are discussing...

 windows is odd for DOS + OS/2 users, it is also at odds
 with numerous Internet standards - precisely the reason why
 we need your draft.  
 
 The chapter talks about line and paragraph separators without
 mentioning relevant ASCII controls such as RS.  On the other

RS (and GS and FS) are regarded the same as PS for bidi
processing, even though they are not mentioned in section 5.8.
But I would agree that using RS, GS, FS, or (even worse) IND
would be aberrant.

US is regarded as similar to a HT for bidi processing (so should
HTJ, but isn't by default). Note that HT is NOT treated the same
as a sequence of spaces for bidi processing. HT always has the
paragraph bidi level, which is not necessarily the case for spaces.
This DOES affect display, in that HT always moves according to
the paragraph level, while spaces may (and often do) move opposite
to the paragraph level. So **DON'T** imply that HT should be 
replaced by spaces; such a replacement WILL have ill display
effects.

However, I do think that these four characters should not be
used.

 hand it mentions MS Word interna which are nobody's business
 outside of MS Word.

I guess you refer to VT (LINE TABULATION). The real reason it
is mentioned is that the C and C++ standards give a special
escape for it (\v, which according to C implies a return to
beginning of line). If it were not for that, I would agree
that VT is not very interesting (though it does provide for
a hack to distinguish line separation from paragraph separation
by ignoring the tabulation aspect of VT, also for pure 8-bit
character encodings).

/Kent Karlsson


 It is interesting, but IMO unusable for net-utf8.


Re: [anonsec] review comments on draft-ietf-btns-prob-and-applic-06.txt

2008-01-14 Thread Stephen Kent

At 6:00 PM -0600 1/11/08, Nicolas Williams wrote:

...

Finally, multi-user systems may need to authenticate individual users to
other entities, in which case IPsec is inapplicable[*].  (I cannot find
a mention of this in the I-D, not after a quick skim.)

[*] At least to my reading of RFC4301, though I see no reason why a
system couldn't negotiate narrow SAs, each with different local IDs
and credentials, with other peers.  But that wouldn't help
applications that multiplex messages for many users' onto one TCP
connection (e.g., NFS), in which case even if my reading of RFC4301
is wrong IPsec is still not applicable for authentication.


IPsec has always allowed two peers to negotiate multiple SAs between 
them, e.g., on a per-TCP connection basis. Ipsec does support 
per-user authentication if protocol ID and port pairs can be used to 
distinguish the sessions for different users. So, if you want to 
restrict the cited motivation to applications that multiplex 
different users onto a single TCP/UDP session, that would be accurate.


Steve



Re: [anonsec] review comments on draft-ietf-btns-prob-and-applic-06.txt

2008-01-14 Thread Stephen Kent

At 2:06 PM -0600 1/14/08, Nicolas Williams wrote:

...

Ipsec does support

 ^
You're slipping :) :)


oh my!


  per-user authentication if protocol ID and port pairs can be used to

 distinguish the sessions for different users.


I thought this was feasible (see above) but I thought the RFC4301 model
didn't quite deal with this (or at least Sam once convinced me that the
name selector of the SPD didn't quite work the way I would think it
should).  I am glad to be wrong on this.

(So then, the name selector in the SPD can be used to select the local
ID and credentials?)


The following text from pages 28-29 of 4301 seems pretty clear on 
this point. I have marked some of the text as bold, to call attention 
to especially relevant parts.


  - Name:  This is not a selector like the others above.  It is not
acquired from a packet.  A name may be used as a symbolic
identifier for an IPsec Local or Remote address.  Named SPD
entries are used in two ways:

 1. A named SPD entry is used by a responder (not an initiator)
in support of access control when an IP address would not be
appropriate for the Remote IP address selector, e.g., for
road warriors.  The name used to match this field is
communicated during the IKE negotiation in the ID payload.
In this context, the initiator's Source IP address (inner IP
header in tunnel mode) is bound to the Remote IP address in
the SAD entry created by the IKE negotiation.  This address
overrides the Remote IP address value in the SPD, when the
SPD entry is selected in this fashion.  All IPsec
implementations MUST support this use of names.

 2. A named SPD entry may be used by an initiator to identify a
user for whom an IPsec SA will be created (or for whom
traffic may be bypassed).  The initiator's IP source address
(from inner IP header in tunnel mode) is used to replace the
following if and when they are created:

- local address in the SPD cache entry
- local address in the outbound SAD entry
- remote address in the inbound SAD entry

Support for this use is optional for multi-user, native host
implementations and not applicable to other implementations.
Note that this name is used only locally; it is not
communicated by the key management protocol.  Also, name
forms other than those used for case 1 above (responder) are
applicable in the initiator context (see below).


So, although support for this capability (for initiators) is not 
strictly required for a multi-user system, we do explain how it is 
intended to work in those systems.



   So, if you want to
 restrict the cited motivation to applications that multiplex
 different users onto a single TCP/UDP session, that would be accurate.


I don't want to restrict it only to such applications, _no_.


Then you should include the sort of text you provided below, to 
justify why BTNS is appropriate in these circumstances, since it is 
not accurate to say that IPsec cannot provide the required support.



...

I think the examples that you object to can remain in the I-D, but it
should be clear that BTNS is not 'RECOMMENDED' (nor 'NOT RECOMMENDED')
for those -- that those examples are speculative.  Provided that such
examples are feasible.


my only requirement is that the motivation text be factually accurate.

Steve


RE: Last Call: draft-klensin-net-utf8 (Unicode Format for Network Interchange) to Proposed Standard

2008-01-10 Thread Kent Karlsson
 
Stephane Bortzmeyer wrote:
  Upon receipt, the following SHOULD be seen as at least line ending
  (or line separating), and in some cases more than that: 
  
  LF, CR+LF, VT, CR+VT, FF, CR+FF, CR (not followed by NUL...),
  NEL, CR+NEL, LS, PS
 
 The whole point of the Internet-Draft on Net-UTF8 is to limit the size
 of the zoo of line endings. Accepting everything in Unicode which
 looks like a line ending seems strange to me. Do you know any
 Internet *protocol* which accepts several line endings? (Some Internet
 *applications* do so, in the name of the robustness principle, but for
 a protocol, I think it is a really bad idea.)

Please reread my comment. I wrote:
| Apart from CR+LF, these SHOULD NOT be emitted for net-utf8, unless
| that is overridden by the protocol specification (like allowing FF, or CR+FF).
| When faced with any of these in input **to be emitted as net-utf8**, each
| of these SHOULD be converted to a CR+LF (unless that is overridden
| by the protocol in question).

I.e. this is about conversion/normalisation of input that is *TO BE*
sent as Net-UTF-8.

As for the receiving side, the same considerations as for the (SHOULD)
requirement (point numbered 4 on page 4) for NFC in Net-UTF-8 apply.
The receiver cannot be sure that NFC has been applied. Nor can it be
sure that conversion of all line endings to CR+LF (thereby losing
information about their differences) has been applied.
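The receiving-side caveat about NFC can be seen in a couple of lines (stdlib sketch):

```python
import unicodedata

# Sketch: an "e-acute" may arrive precomposed (U+00E9) or decomposed
# (e + U+0301); the two render alike but compare unequal as raw code
# points until the receiver normalizes to NFC itself.
composed, decomposed = "\u00e9", "e\u0301"
print(composed == decomposed)                                  # False
print(unicodedata.normalize("NFC", decomposed) == composed)    # True
```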

/kent k




RE: Last Call: draft-klensin-net-utf8 (Unicode Format for NetworkInterchange) to Proposed Standard

2008-01-10 Thread Kent Karlsson
 

 -Original Message-
 From: Bob Braden [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, January 09, 2008 6:15 PM
 To: ietf@ietf.org; [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: RE: Last Call: draft-klensin-net-utf8 (Unicode 
 Format for NetworkInterchange) to Proposed Standard
 
   * Upon receipt, the following SHOULD be seen as at least 
 line ending
   * (or line separating), and in some cases more than that: 
   * 
   * LF, CR+LF, VT, CR+VT, FF, CR+FF, CR (not followed by NUL...),
   * NEL, CR+NEL, LS, PS
 
 I don't know whether you regard this as relevant, but I 
 believe that in
 the ARPAnet/early Internet days, FF was not regarded as ending a line.
 FF only moved the platen one line at the current character 
 position.  I
 believe that this was a formalization of the mechanical operation of a
 teletype machine.

Yes, but hardly relevant these days. Note that that interpretation
is similar to raw LF. Still, in many systems LF by itself is line
ending (cooked LF if you like). Likewise CR, by itself, is line
ending in some systems. Note that CR+FF is listed above as a
single line end, or rather a single line separator (cmp. LF).
(So this is cooked mode for LF and FF, for those who remember
the lpr command...; or even still use it...)

See section 5.8 of http://www.unicode.org/versions/Unicode5.0.0/ch05.pdf.
XML also (for conversion reasons) regards CR+NEL as a single line end.

B.t.w. many programs (long ago) had a bug that deleted the last
line if it was not ended with an LF. As an additional comment,
I think that the Net-UTF-8 document should state that the
last line need not be ended by CR+LF (or any other line
end/separator), though it should be. This is just as a matter of
normalising the line ends for Net-UTF8, not for UTF-8 in general.

/kent k


  This interpretation was also, I believe,
 incorporated by Jon Postel in the rules for NVT and for RFC
 formatting.
 
 I can't believe I am reopening this old topic... ;-(
 
 Bob Braden
 




RE: Last Call: draft-klensin-net-utf8 (Unicode Format for Network Interchange) to Proposed Standard

2008-01-09 Thread Kent Karlsson
Comment on draft-klensin-net-utf8-07.txt:

--

Network Virtual Terminal (NVT) occurs first in Appendix A.
The explanation of the abbreviation should (also) be given at
the first occurrence of NVT in the document.

--

Section 2, point 2, Line-endings...

   discussion.  The newer control characters IND (U+0084) and NEL
   (Next Line, U+0085) might have been used to disambiguate the

I have a hard time figuring out what IND was supposed to be used for,
but I don't think it was for line endings. Chain printer font change is
the closest I get... (http://www.freepatentsonline.com/3699884.html).

NEL is used in EBCDIC originally (IIUC), and still used in EBCDIC...

The description "might have been used to disambiguate" is more
appropriate for U+2028 and U+2029.

--

   it, lines end in CRLF and only in CRLF.  Anything that does not
   end in CRLF is either not a line or is severely malformed.

The sentence starting with Anything seems severely malformed...
You don't really mean to say Anything, I hope. Using other line
ending or line separation conventions perhaps. And severely
malformed, I hope you did not mean that either. is lacking in
conversion to 'net-utf8'/'net-Unicode' perhaps.

To be restrictive in what one emits and permissive/liberal in
what one receives might be applicable here.

Upon receipt, the following SHOULD be seen as at least line ending
(or line separating), and in some cases more than that: 

LF, CR+LF, VT, CR+VT, FF, CR+FF, CR (not followed by NUL...),
NEL, CR+NEL, LS, PS
where
LF  U+000A
VT  U+000B
FF  U+000C
CR  U+000D
NEL U+0085
LS  U+2028
PS  U+2029

even FS, GS, RS
where
FS  U+001C
GS  U+001D
RS  U+001E
should be seen as line separating (Unicode specifies these as having bidi
property B, which effectively means they are paragraph separating).

Apart from CR+LF, these SHOULD NOT be emitted for net-utf8, unless
that is overriden by the protocol specification (like allowing FF, or CR+FF).
When faced with any of these in input **to be emitted as net-utf8**, each
of these SHOULD be converted to a CR+LF (unless that is overridden
by the protocol in question).
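The receive-side list and the emit-side conversion rule above can be sketched in a few lines. This is this reviewer's illustration, not draft text: the separator set is taken from the list above, and the CR-followed-by-NUL special case is deliberately ignored for simplicity.

```python
import re

# LF, VT, FF, NEL, LS, PS, plus FS/GS/RS (bidi property B).
_SEPARATORS = "\u000A\u000B\u000C\u0085\u2028\u2029\u001C\u001D\u001E"

# CR+LF first so it is consumed as one unit, then CR followed by any
# single separator, then bare CR, then each separator on its own.
_LINE_BREAK = re.compile("\r\n|\r[%s]|\r|[%s]" % (_SEPARATORS, _SEPARATORS))

def to_net_utf8_line_endings(text: str) -> str:
    """Map every line-ending convention listed above to CR+LF,
    as proposed for input that is to be emitted as net-utf8."""
    return _LINE_BREAK.sub("\r\n", text)
```

Note that a real implementation would also have to honor the CR-not-followed-by-NUL caveat and any protocol-level overrides (e.g. FF) discussed above.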

--

Section 2, point 3:

You have made an exception for FF (because they occur in RFCs?).
I think FF SHOULD be avoided, just like VT, NEL, and more (see above).
Even when it is allowed, it, and CR+FF, should be seen as line separating.

You have also (by implication) dismissed HT, U+0009. The reason for this is
unclear. Especially since HT is so common in plain texts (often with some
default tab setting). Mapping HT to SPs is often a bad idea. I don't think a
default tab setting should be specified, but the effect of somewhat (not wildly)
different defaults for that is not much worse than using variable width fonts.

SP, U+0020, is nowadays not seen as a control character, not even in
your own text... (same paragraph).


--

   However, because they were optional in NVT applications
   and this specification is an NVT superset, they cannot be prohibited
   entirely. 

Why not? Why must this be a strict NVT superset? I think it would be rather
important to rule these strange beasts out from net-utf8. These were really
ASCII (ISO 646) features, but were ruled out long before Unicode.

--

   The most important of these rules is that CR MUST NOT
   appear unless it is immediately followed by LF (indicating end of
   line) or NUL.

I don't see how that follows (read: that does not follow).
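For concreteness, the quoted rule — CR may appear only as CR LF or CR NUL — amounts to a check like the following. This sketch is this reader's paraphrase of the rule, not draft text:

```python
def bare_cr_positions(data: bytes):
    """Return offsets of every CR that is not immediately followed by
    LF or NUL -- exactly the occurrences the quoted rule forbids."""
    return [
        i for i, b in enumerate(data)
        if b == 0x0D and (i + 1 >= len(data) or data[i + 1] not in (0x0A, 0x00))
    ]
```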

--

 [ISO10646]
  International Organization for Standardization,
  Information Technology - Universal Multiple-Octet Coded
  Character Set (UCS) - Part 1: Architecture and Basic
  Multilingual Plane, ISO/IEC 10646-1:2000, October 2000.

That seems a bit old... Better with the current revision:

ISO/IEC 10646:2003   Information technology -- Universal Multiple-Octet Coded
Character Set (UCS)

with the amendments (which I don't think you should reference explicitly):
ISO/IEC 10646:2003/Amd 1:2005  Glagolitic, Coptic, Georgian and other characters
ISO/IEC 10646:2003/Amd 2:2006  N'Ko, Phags-pa, Phoenician and other characters
(and more amendments in the works).

--



___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


review comments on draft-ietf-btns-prob-and-applic-06.txt

2008-01-07 Thread Stephen Kent
I have reviewed this document as part of the security directorate's 
ongoing effort to review all IETF documents being processed by the 
IESG.  These comments were written primarily for the benefit of the 
security area directors.  Document editors and WG chairs should treat 
these comments just like any other last call comments.


This document is not well structured, i.e., in many places it 
rambles. The document has more of an architectural framework feel to 
it than the title suggests. It spends too much time saying how BTNS 
will work, rather than focusing on the nominal topic of the document, 
i.e., the problem to be solved and the anticipated applicability of 
the solution. It never provides a clear, concise characterization of 
the problem to be solved, and why the functionality offered by 
BTNS-IPsec is the preferred way to solve the problem. (I believe this 
problem arises because, from the beginning, there have been 
multiple, independent motivations for the BTNS work and the WG never 
reconciled them.)


There seem to be two types of problems/solutions that motivate BTNS, 
both starting with the assumption that use of IPsec is the goal (an 
assumption that needs to be justified itself, as noted below). The 
solutions are presented before examples of the problems, which does 
not help matters, but I'll characterize the problems in terms of the 
solutions, in keeping with the style of the I-D:


  - creating IPsec/IKE SAs w/o authentication, for use in contexts where
    it is perceived that IPsec is not used because the effort to deploy an
    authentication infrastructure compatible with IKE is too great a burden
    AND the confidentiality and integrity offered by unauthenticated SAs is
    better than nothing. Since IKE supports use of passwords, this presumes
    that the threshold for what constitutes too great a burden is pretty
    low, but this is not explicitly stated. Also, the BGP use case was
    disputed, when this work started, and has proven to be a bad example
    given continuing developments, but it persists in the document. There
    is also a not-well-articulated argument that TLS/DTLS is not a suitable
    alternative, presumably because those protocols do not protect the
    transport protocol per se. It's true that IPsec does a better job here,
    but the need for using it (vs. TLS) in such circumstances does not seem
    to be widely accepted.

  - creating IPsec/IKE SAs w/o authentication, for use in contexts where an
    application will perform its own authentication, but wants the layer 3
    confidentiality, integrity and continuity of authentication offered by
    ESP. Here a critical part of the argument is that these applications
    cannot use the authentication provided by IKE, but the explanation for
    this is poor. For example there is no recognition of the use of EAP
    authentication methods with IKE. The text also does not address the
    possibility that a suitable API could allow an application to acquire
    and track the ID asserted during an IKE exchange, in lieu of the
    unauthenticated SA approach that is being motivated.


The document fails to introduce important concepts like continuity of 
authentication and channel binding near the beginning. If leap of 
faith authentication is important enough to be included, then it too 
needs to be described early in the document. The document never 
provides a clear, concise definition of channel binding, and the 
definition of LoF is mostly by example. The failure to define these 
terms early in the document leads to ambiguity and confusion in the 
problem statement sections.


Several of the examples provided in the applicability section do not 
seem congruent with security efforts in the relevant areas. I 
mentioned the BGP connection example above, which is even less 
relevant today, given the ongoing TCPM work on TCP-AO.  There is also 
an assertion that BTNS-IPsec is a good way to protect VoIP media, yet 
the RTP folks never believed that and the RAI area has recently 
reaffirmed its commitment to use of SRTP for this purpose, with DTLS 
for key management. Another questionable example is the suggestion to 
use both BTNS-IPsec and TLS to protect client/server connections 
against TCP RST attacks. This is theoretically a valid use of 
BTNS-IPsec, but there is no indication that web server operators 
believe this is a necessary capability, as the I-D argues.


The security considerations section is too long, mostly because much 
of the material should be earlier, e.g., the CB discussion.  One 
might also move the rekeying attack example (which I expanded to be 
more accurate) to the CB document, and just reference the notion here.


I am unable to attach a copy of the I-D, with MS Word change tracking 
for detailed comments and edits, because it is too big for these 
lists. A copy of that file was sent to the cognizant Security AD, WG 
chairs, and authors.


Steve


Re: Last Call: draft-shimaoka-multidomain-pki-11.txt

2007-12-04 Thread Stephen Kent

At 7:34 PM +0100 12/4/07, Martin Rex wrote:

The document


 - 'Memorandum for multi-domain Public Key Infrastructure
Interoperability'

 draft-shimaoka-multidomain-pki-11.txt as an Informational RFC

creates the impression that trust anchors must always be
self-signed CA certificates.

What is a trust anchor MUST remain completely up to local policy (which
might be a client-local policy in some scenarios), there should
be NO restriction whatsoever what can be configured as a trust anchor.

The idea of a trust anchor is that we trust the (public) key of the
trust anchor, that the PKI implementation may perform a reduced
(certificate) path validation only up to the trust anchor.
The management of trust anchors is also completely a local (policy) issue,
i.e. what keys are considered trust anchors, how they are distributed,
managed and updated.

I am violently opposed to the documents requirements and restrictions
what may an what may not be a trust anchor certificate.  Document
published by the IETF (even if just Informational) should neither
make unconditional restrictions (MUST NOT) nor unconditional requirements
(MUST) for the selection of trust anchors.  Instead, Protocols and
implementations SHOULD support the use of arbitrary trust anchors
as desired by local policy.

-Martin



Martin,

You are right that a TA need not be a self-signed cert, although this 
is the most common format for TA representation.


Your statement about how a TA allows a relying party to perform a reduced
(certificate) path validation is confusing. I believe that we always 
assume that cert path validation terminates at a TA for the RP. We 
both agree that the selection and management of TAs is purely a local 
matter for each RP.
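The point that path validation terminates at whatever the relying party has configured as a TA (self-signed cert or not) can be sketched abstractly. The Cert type and chain walk below are this writer's simplification for illustration, not any real PKI library's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cert:
    subject: str
    issuer: str
    issued_by: Optional["Cert"]  # link to the issuer's cert; None at the top

def validate_to_trust_anchor(cert: Cert, trust_anchors: set) -> bool:
    """Walk up the issuer chain; succeed as soon as a locally configured
    TA is reached -- path validation terminates there, per local policy."""
    current = cert
    while current is not None:
        if current.subject in trust_anchors:
            return True          # reached a TA; no need to go higher
        current = current.issued_by
    return False                 # ran off the top without meeting a TA
```

Configuring an intermediate CA as the TA makes the chain validate without ever consulting the self-signed cert above it, which is why TA selection is purely the RP's local decision.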


In general I do not worry too much about what an informational RFC 
that is not the product of a working group says. However, looking at 
the abstract for this document I do see some words that cause me some 
concern, i.e., The objective of this document is to establish a 
standard terminology for interoperability of multi-domain Public Key 
Infrastructure (PKI), where each PKI Domain is operated under a 
distinct policy ...


We ought not make such strong statements in a document of this sort. 
I agree that the authors need to soften the wording to indicate that 
this document defines terminology to describe multi-domain PKI 
models, as an aid to discussing issues in these contexts.


Steve



Re: Last Call: draft-ietf-pkix-rfc3280bis (Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile) to Proposed Standard

2007-12-03 Thread Stephen Kent
Sam Hartman identified an issue with one name type (URI) that may 
appear in the Subject/Issuer alternative names, when applying the 
Name Constrains extension to such names.  The issue arises when the 
URI does not contain an authority component (a host name in a DNS 
name or e-mail address), because the 3280bis description of how to 
apply name constraints to URIs assumes the presence of such a 
component.
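The problem is easy to see with any standard URI parser: the 3280bis rules constrain the host part of a URI's authority component, but some perfectly legal URIs have no authority component at all. A sketch (Python's urllib.parse used purely for illustration; the function name is this writer's invention):

```python
from urllib.parse import urlparse

def uri_host_for_name_constraints(uri: str):
    """Return the host a dNSName-style constraint could be applied to,
    or None when the URI has no authority component (e.g. a URN)."""
    return urlparse(uri).hostname
```

For "https://example.com/path" this yields "example.com", so the constraint rules apply; for "urn:ietf:rfc:3280" it yields None, which is exactly the case the 3280bis text did not cover.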


The WG is working on text to address this issue in response to the 
last call comment.


Steve



Re: [PMOL] Re: A question about [Fwd: WG Review: Performance Metrics atOther Layers (pmol)]

2007-11-15 Thread Stephen Kent

Joe,

This discussion  seems to have moved from a discussion of crypto use 
on home/office computers, to use in routers. There is no good 
motivation for other than edge (CPE?) routers to make use of IPsec 
for subscriber traffic.  We know, from discussions with operators, 
that use of IPsec to protect BGP is a non-starter, because of where 
in the router the processing would be done (given current router 
designs).  In any case, use of IPsec by routers is a very different 
topic that use in home/office computers and ought not be brought into 
this discussion.


As for the original topic, yes, performance hits come in various 
flavors when we discuss crypto protocol use. For example, there was a 
good paper at NDSS a few years ago that showed how marshalling of 
data in  SSL implementations was a very big part of the performance 
hit. Nonetheless, the bottom line is that for mainstream users, most 
of us are not convinced that performance is the primary reason for 
not using crypto.


Steve



Re: [PMOL] Re: A question about [Fwd: WG Review: Performance Metrics atOther Layers (pmol)]

2007-11-14 Thread Stephen Kent

Joe,

I disagree with your suggestion The software performance of security 
protocols has been the more substantial issue, and is likely to 
continue to be for the forseeable future.


I suspect that most desktop users do not need hardware crypto for 
performance.  I rarely if ever drive my GigE interface at its line 
rate. With fast processors, especially multi-core processors, we have 
enough cycles to do symmetric crypto at data rates consistent with 
most application demands for individual users.  Public key operations 
for key management are usually low duty cycle, so they too can be 
accommodated.


Steve



RE: The Internet 2.0 box Was: IPv6 addresses really are scarce after all

2007-08-23 Thread Stephen Kent

At 11:23 AM -0700 8/23/07, Hallam-Baker, Phillip wrote:
If we can meet the needs of 80% of Internet users with some form of 
shared access there will be more addresses left for the 20% with 
greater needs.


I suspect that the actual percentages are more like 95% and 5%.

My Internet use is certainly not typical, it is considerably more 
intensive than the median user.


And as for the claim that I would saddle the Internet with a 1970s 
technology, I don't think that DNS counts. For a start the SRV 
record only appeared in the late 90s. It is much easier to rant 
against something when you don't bother to find out what it is.


The DNS is a 1980's technology. We used hosts.txt prior to that.

Steve



Re: Review of draft-hartman-webauth-phishing-05

2007-08-22 Thread Stephen Kent

Henning,

Some WGs issue Informational RFCs that represent WG consensus, but 
which are not viewed as suitable Standards track documents, for 
various reasons.  For example, RFC 3647 is one of the most widely 
cited of the PKIX RFCs, yet it is Informational because it's a policy 
and procedures document, not a protocol document. Some WGs also 
choose to publish a requirements spec as Informational, even though 
the document that will meet the requirements will itself be standards 
track.


So I think we have a wide variety of document types that wind up as 
informational RFCs, even today.


Steve



Re: draft-shirey-secgloss-v2-08.txt

2007-08-09 Thread Stephen Kent

At 9:32 AM -0400 8/9/07, David Harrington wrote:

Hi,

The issue was raised during ISMS WGLC that there is a difference
between our use of the word authenticate and the glossary in RFC2828.
Since ISMS extends SNMPv3, ISMS is using terminology consistent with
the SNMPv3 standard, which reflects English usage.


Could you identify the specific I-D in which this mismatch with 
the 2828(bis) definition arises?



I think re-defining the word authenticate is not a good idea. I think
it will not help the IETF write clear and unambiguous specifications
to redefine words for IETF usage that are already clearly defined in
English. if we want new keywords, then the IETF should invent new
terms, not redefine existing terms.


I would not make that argument in general, because technologies very 
often assign special or narrow definitions for common English words. 
In the IETF context, "tunnel" might be a good example; "peer", etc.


Steve



Re: IPv4

2007-08-09 Thread Stephen Kent

At 6:35 AM -0700 8/9/07, Bill Manning wrote:

...
  The RIRs are working to enable clean transfer of address space

 holdings, using X.509 certs. While one could do what Harald
 suggested, the new address space holder would have to worry about HP
 revoking the cert it issued to effect the transfer. A cleaner model
 would call for HP to effect the transfer through a registry, so that
 HP is no longer in the cert path.



and they would then not have to worry about the RIR revoking the cert?

--bill


The RIRs are recognized as neutral, primary address space allocators 
who have contractual relationships with the folks to whom they 
allocate addresses. I think it might be more attractive to the new 
holder of address space to have a relationship with an RIR vs. the 
former address space holder, a company that might be acquired by 
another company, change business orientation, declare bankruptcy, ...


Steve



Re: IPv4

2007-08-09 Thread Stephen Kent

At 9:03 AM -0700 8/9/07, Bill Manning wrote:

...
  The RIRs are recognized as neutral, primary address space allocators

 who have contractual relationships with the folks to whom they
 allocate addresses. I think it might be more attractive to the new
 holder of address space to have a relationship with an RIR vs. the
 former address space holder, a company that might be acquired by
 another company, change business orientation, declare bankruptcy, ...



...are recognized...  - by whom?
... I think...  - may not be a shared point of view.


On the IETF list many messages represent points of view that are not 
widely shared :-).




One might assume that when GE issues certs for subnets of its
address space, that a contractual relationship would exist.


I agree that a legal agreement with a large company (e.g., GE or HP) 
might be just as good a guarantee as an analogous agreement with an 
RIR.




And I am pretty sure that GE has been around for quite a few years
longer than all the RIR's combined.


It has, but it has been in and out of various business areas as the 
CEO and Board have decided what is and is not a good place to make 
money as times change.



RIR's are legal entities, just like other companies... and are subject
to the same problems that companies have, e.g. aquired, changed
orientation, bankruptcy... etc.  They are NOT special.



I disagree with your suggestion that the RIRs are not special. They 
are recognized as being special by ICANN.


I will not pursue a rat-hole discussion on whether ICANN is special :-).

Steve



Re: IPv4

2007-08-09 Thread Stephen Kent

At 11:40 AM -0700 8/9/07, Bill Manning wrote:

O...


ICANN is also a legal entity, with the same vulnerabilities
as all other companies including RIR's... which was my point.
Special is reserved for governments... :)


The U.S. Dept. of Commerce recognizes ICANN exclusively (via an MoU) 
with regard to certain Internet functions, so maybe that makes ICANN 
special, once removed :-).




Re: IPv4

2007-08-08 Thread Stephen Kent

At 4:36 PM +0200 8/8/07, Iljitsch van Beijnum wrote:

On 8-aug-2007, at 12:07, Harald Alvestrand wrote:

Routing certificates are simple. If HP sells (lends, leases, 
gifts, insert-favourite-transaction-type-here) address space to 
someone, HP issues a certificate (or set of certificates) saying 
that this is how HP wants the address space to be routed; the fact 
that the routes point to non-HP facilities is nothing that the 
route certificate verifiers can (or should) care about.


If this is how it works, then apparently you CAN de facto own 
address space after all.




The RIRs are working to enable clean transfer of address space 
holdings, using X.509 certs. While one could do what Harald 
suggested, the new address space holder would have to worry about HP 
revoking the cert it issued to effect the transfer. A cleaner model 
would call for HP to effect the transfer through a registry, so that 
HP is no longer in the cert path.


Steve



Re: PKI is weakly secure

2007-07-10 Thread Stephen Kent

At 10:54 AM +0900 7/10/07, Masataka Ohta wrote:

...
Stephen Kent wrote:


 The notions of CA compromise and ISP compromise are not completely
 comparable, which makes your comparison suspect.


As I already mentioned, social attacks on employees of CAs and
ISPs are equally easy and readily comparable.


the attacks may be comparable but not all of the effects are the same.


  Also, the security implications of errors (or sloppiness) by ISPs is

 very different from that of CAs, so I don't think your comparison makes
 sense in that regard as well.


Given the sloppiness of current DNS management, secure DNS CAs, which
is an PKI, will be no different from that of ISPs.


DNSSEC is very analogous to a PKI in many respects, but it too is not 
quite the same. A major difference is that the DNS hierarchy is 
authoritative for the bindings it establishes, whereas the common, 
trusted-third party CA model involves organization who are 
authoritative for nothing.



It hard for you to recognize that most, if not all, of the effort
of IETF security area has been wasted in vain.


As opposed to wasting efforts constructively?


But that's the
reality.

Masataka Ohta



It's so generous of you to provide the rest of us with your wisdom 
with regard to the reality of security. I'm not sure we are 
deserving, and so maybe it would be fairer to not share so much.


Steve


Re: PKI is weakly secure (was Re: Updating the rules?)

2007-07-10 Thread Stephen Kent

At 1:13 PM -0700 7/10/07, Douglas Otis wrote:

On Jul 8, 2007, at 10:34 PM, Eliot Lear wrote:


This can be said of any technology that is poorly managed.


So, you merely believe that the infrastructure of PKI is well managed.


In all but a single instance I have no evidence to the contrary.  
The one case of an exploit was extremely well publicized and 
ameliorated within days.  And that was years ago.


Trust Models.

Once a CA is vetted, it can be leveraged as a point of trust.  The 
trust is of an association with a URL validated by the certificate.


your reference to a URL is a very specialized (not generic) 
description of how one might interpret the security services 
associated with a CA.


Steve




Re: PKI is weakly secure (was Re: Updating the rules?)

2007-07-09 Thread Stephen Kent

At 6:36 PM +0900 7/7/07, Masataka Ohta wrote:

Keith Moore wrote:


Also from the draft:
At least for the strong security requirement of BCP 61 [RFC3365], the
Security Area, with the support of the IESG, has insisted that all
specifications include at least one mandatory-to-implement strong
security mechanism to guarantee universal interoperability.

I do not think this is a factual statement, at least when it comes to
HTTP, which is where my interest lies.


 note that it is not necessary to have at least one
 mandatory-to-implement strong security mechanism to guarantee


What, do you mean, strong security?

Given that CAs of PKI can be compromised as easily as ISPs
of the Internet, PKI is merely weakly secure as weakly as
the plain Internet.

Masataka Ohta


The notions of CA compromise and ISP compromise are not completely 
comparable, which makes your comparison suspect.


Also, the security implications of errors (or sloppiness) by ISPs are 
very different from those of CAs, so I don't think your comparison 
makes sense in that regard as well.


Steve



Re: Questions on my role as the 2007-8 nomcom chair and the various discussions on the IETF list

2007-06-14 Thread Stephen Kent

At 12:29 AM -0700 6/13/07, Lakshminath Dondeti wrote:

Folks,

One person has voiced concerns on my taking a strong public 
position in the Should I* opinions be afforded a special status? 
thread while serving as the chair of the 2007-8 nomcom.  Perhaps 
there are others with similar concerns.


The role of the nomcom chair is clearly described in 3777 and I 
intend to conform to that RFC.  Beyond my assigned duties, I am an 
ordinary contributor at the IETF and my opinions count no more than 
other contributors' opinions.  For those who don't want to rely on 
my assurance, RFC 3777 comes to their rescue and mine.  The adviser 
and the at least three liaisons are expected to ensure that the 
chair in particular executes the assigned duties in the best 
interests of the IETF community.


If you have further questions, please do not hesitate to contact me.

I think there is nothing wrong with your dual roles as individual 
participant in IETF discussions and as NOMCOM chair, especially since 
the chair is a non-voting member of the NOMCOM.


Steve



Re: Withdrawal of Approval and Second Last Call: draft-housley-tls-authz-extns

2007-04-11 Thread kent
On Wed, Apr 11, 2007 at 01:54:53PM +0200, Brian E Carpenter wrote:
 Ted,
 
 Well, if IPR owners don't actually care, why are they asking people to
 send a postcard?  It would seem to be an unnecessary administrative
 burden for the IPR owners, yes?
 
 My assumption is that they care if the party that fails to send
 a postcard is one of their competitors. That's what the defensive
 clauses in these licenses are all about, afaics.


I'm not sure I understand the "Upon request...will provide" clause. 
Actually, I'm sure I don't understand it...

Does it in any significantly enforceable way *require* RedPhone Security to do
anything? If so, then all the competitor has to do is send a postcard, so the
defensive value is effectively zero -- anyone who is a significant 
competitor can certainly afford a postcard.

If, on the other hand, it imposes no real requirements on RedPhone Security, 
what does it do?  Why is it there?







Re: Last Call: 'A Lightweight UDP Transfer Protocol for the the Internet Registry Information Service' to Proposed Standard (draft-ietf-crisp-iris-lwz)

2006-08-16 Thread kent crispin
On Wed, Aug 16, 2006 at 11:55:58AM -0400, Andrew Newton wrote:
 Harald Alvestrand wrote:
 There's nothing in the document that says if you want to send 4000 
 requests, and 70 out of the first 100 get lost, you should slow down 
 your sending rate to that server.
 
 I just checked the simple user-drive, cli client I wrote and it doesn't 
 retransmit at all (perhaps not the best UI experience).  I'll check with 
 the other implementers to see what they did.  But you are right, guidance 
 needs to be given, especially if these things get embedded into automated 
 scripts.

s/if/when/
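The guidance being asked for — slow the sending rate when a server stops answering — is conventionally some form of exponential backoff with jitter and a retry cap. A generic sketch of that idea, not text from the draft (parameter names and values are illustrative):

```python
import random

def retry_schedule(base=0.5, cap=32.0, attempts=5):
    """Yield successive retransmit delays: exponential backoff with
    full jitter, so many clients embedded in automated scripts do not
    hammer a congested server in lockstep."""
    for n in range(attempts):
        yield random.uniform(0, min(cap, base * (2 ** n)))
```

A client would sleep for each yielded delay between retransmissions and give up after the last attempt rather than retrying indefinitely.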

-- 
Kent Crispin 
[EMAIL PROTECTED]p: +1 310 823 9358  f: +1 310 823 8649





RE: [TLS] Review of draft-housley-tls-authz-extns-05

2006-05-24 Thread Stephen Kent

Russ,

I concur with Pasi's observations.  I don't recall seeing a similar 
structure in an RFC, where a part is informative, in what is 
otherwise a standards track document.


Steve



Re: Fwd: TLS authorizations draft

2006-05-22 Thread Stephen Kent

At 10:16 AM -0400 5/18/06, Russ Housley wrote:
I received this note from Angelos Keromytis regarding the 
draft-housley-tls-authz-extns document.  I plan to accommodate this 
request unless someone raises an objection.


Russ



OK, I'll object :-).

KeyNote has no IETF status, to the best of my knowledge.  It is 
closely aligned with the SDSI/SPKI work for which the IETF created a 
WG, but ultimately rejected as a standards track effort.  So, I find 
it inappropriate to extend this standards track document to include 
support for a technology that, via a tenuous link, never made it to 
standards track status in the IETF.


Steve



Re: FW: IETF Last Call under RFC 3683 concerning JFC (Jefsey) Morfin

2006-01-22 Thread kent crispin
On Sun, Jan 22, 2006 at 12:20:15PM +0100, Anthony G. Atkielski wrote:
 
 Eventually you end up with multiple groups on a list: those who
 irritate others, those who want to censor the ones they find
 irritating, and--sometimes--a minority of people who are grown-up
 enough to stay out of both of these groups and continue their normal
 work, cheerfully ignoring the children at play on the list.

  When this happens, I've only ever seen two possible outcomes.  Either
  the smoke generators are ejected, or the productive members leave.
 
 There's a third possible outcome: The productive members are smart,
 they ignore the smoke, and they continue to work efficiently. 

A fundamental aspect of group decision making is defining the group making
the decision.  Uncoordinated individual procmail rulesets don't cut it if
there is any kind of accountability requirement. 

IETF lists are frequently required to make group decisions, and those
decisions do have an accountability requirement.  Decisions have to stand on the
public record, not the filtered view of some hypothetical elite.  When an IESG
member examines the archive of a list to review some decision, that review
won't tell the IESG member who is filtering whom.

I've been on lists where I thought there was limited and mostly intelligent
traffic, until I looked at the archive. 

 But the
 productive members do have to be _smart_, and unfortunately that's
 more the exception than the rule, even on lists where the members like
 to believe themselves smart.

Would that things were so simple.

-- 
Kent Crispin 
[EMAIL PROTECTED]p: +1 310 823 9358  f: +1 310 823 8649
[EMAIL PROTECTED] SIP: [EMAIL PROTECTED]




Re: EARLY submission deadline - Fact or Fiction?

2005-11-29 Thread kent crispin
On Tue, Nov 29, 2005 at 10:10:39AM -0800, Dave Crocker wrote:
 If I understand correctly, you want to retain a deadline, but give the wg 
 chair authority to override it.  This certainly is reasonable, but I think 
 it is not practical because it adds administrative overhead (and probably 
 delay) in the Internet-Drafts processing mechanism.
 
 A simpler rule is that the working group gets to decide its deadlines and 
 what will be discussed at the meeting.  (All of this is predicated on 
 moving towards fully automated I-D issuance.)

If I understand the two choices you present are:  

1) the wg has to decide to overrule a default deadline; 
2) the wg has to decide on all of its own deadlines.

It seems to me (granted I have limited experience) that the administrative
overhead is actually higher in the second case -- frequently it simplifies
things to just have a default case.

Kent

-- 
Kent Crispin 
[EMAIL PROTECTED]p: +1 310 823 9358  f: +1 310 823 8649
[EMAIL PROTECTED] SIP: [EMAIL PROTECTED]




Re: Diagrams (Was RFCs should be distributed in XML)

2005-11-14 Thread kent crispin
On Mon, Nov 14, 2005 at 09:27:46PM +, Stewart Bryant wrote:
 We need to change the publication process so that we can move away
 from 1960's improvisations to clear diagrams using modern
 techniques. Anything less leaves us without the ability to
 describe protocols using the clearest methods currently
 available.

The clearest methods currently available might include visio diagrams or
powerpoint slides -- at least according to some people. 

-- 
Kent Crispin 
[EMAIL PROTECTED]p: +1 310 823 9358  f: +1 310 823 8649
[EMAIL PROTECTED] SIP: [EMAIL PROTECTED]




Re: On PR-actions, signatures and debate

2005-10-06 Thread kent crispin
On Fri, Oct 07, 2005 at 06:46:22AM +0200, Anthony G. Atkielski wrote:
 How?  People who cannot tolerate disagreement are very poor at
 discussing most things, anyway, since they become upset as soon as
 anyone expresses an opinion different from their own.

Toleration of disagreement has almost nothing to do with it.  Instead, it's
more a matter of signal to noise ratio on a limited bandwidth channel.  If
you fill up a list with ignorant drivel, people who don't have time to deal
with drivel will go away, leaving the list to those who produce the drivel. 
That's the problem.  I've seen it happen many times. 

-- 
Kent Crispin 
[EMAIL PROTECTED]




Re: UN

2005-09-29 Thread kent crispin
On Fri, Sep 30, 2005 at 05:40:24AM +0200, Anthony G. Atkielski wrote:
 Paul Hoffman writes:
 
  You talk as if you were a root operator and you know what they would
  do. In fact, you run an alternate root, not a real root, so it seems
  that you knowing what real root operators would do is particularly 
  unlikely.
 
 There really isn't any such thing as a "real root" or "alternate root"
 on the Internet, just as paper currency and coins have no real
 value.  It all depends on what the majority decides to do.  If
 everyone switches to an alternate root tomorrow, then the "real"
 root won't matter.

That sounds good, but in fact, it's utter nonsense.  It's like saying that
the only difference between a rowboat and a cargo ship is what people believe
about them.  In fact, if everybody started using one of the alternate roots,
it would simply collapse.

There is far more to the real root system than just human sentiment.  There
is heavy duty infrastructure, both human and physical, involved.

-- 
Kent Crispin 
[EMAIL PROTECTED]    p: +1 310 823 9358  f: +1 310 823 8649
[EMAIL PROTECTED] SIP: [EMAIL PROTECTED]




Re: what is a threat analysis?

2005-08-12 Thread Stephen Kent

At 3:08 PM -0700 8/11/05, Ned Freed wrote:

I thought that what Russ asked for was not a threat analysis for
DKIM, but a threat analysis for Internet e-mail, the system that DKIM
proposes to protect. The idea is that only if we start with a
characterization of how and why we believe adversaries attack e-mail,
can we evaluate whether any proposed security mechanism, e.g., DKIM,
is appropriate, relative to that threat analysis.


This is more or less my guess as to what's being asked for, although I
disagree with the implication that DKIM proposes to protect email in
its entirety. Regardless, others do not appear to agree and instead
appear to be doing very different sorts of analyses.

Ned


I agree that DKIM need not protect e-mail in all security dimensions. 
My definition of threat analysis for this context does not require 
that, although I admit the wording could have been clearer.


In any threat analysis, the author decides what threats he/she wants 
to address. The reader decides if the author has omitted any that the 
reader believes are important (to the reader), and thus may reject 
the analysis if threats of interest to the reader were not addressed.


In this case, I believe the informal discussion centered on 
adversaries who wish to inject spam into the Internet e-mail system, 
or who wish to engage in phishing attacks via e-mail. If so, then the 
author merely states that, and proceeds to discuss the motivations 
for such adversaries (what constitutes success for them) and by what 
means they can/do carry out attacks.


With this as background, the author then explains how a proposed set 
of countermeasures prevents such attacks, or makes them harder, etc. 
The reader then evaluates the claims of the author re the 
effectiveness of the proposed countermeasures, given an agreed upon 
threat model.
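The shape of such an analysis can be sketched as a data structure; the field names and the DKIM-flavored example values below are illustrative, not drawn from any RFC or document:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatAnalysis:
    """Skeleton of the analysis structure described above."""
    adversaries: list                  # classes of adversaries in scope
    motivations: dict                  # adversary -> what success means to them
    attack_means: dict                 # adversary -> how they carry out attacks
    countermeasures: dict = field(default_factory=dict)  # threat -> mitigation claim

# Hypothetical instance for the e-mail context discussed informally.
email_context = ThreatAnalysis(
    adversaries=["spammer", "phisher"],
    motivations={"spammer": "inject bulk mail", "phisher": "impersonate senders"},
    attack_means={"phisher": ["forged From: header"], "spammer": ["open relays"]},
)
```

The reader's job, in this framing, is to check whether `adversaries` omits anyone they care about before evaluating the `countermeasures` claims.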



Steve



Re: what is a threat analysis?

2005-08-11 Thread Stephen Kent

Folks,

I thought that what Russ asked for was not a threat analysis for 
DKIM, but a threat analysis for Internet e-mail, the system that DKIM 
proposes to protect. The idea is that only if we start with a 
characterization of how and why we believe adversaries attack e-mail, 
can we evaluate whether any proposed security mechanism, e.g., DKIM, 
is appropriate, relative to that threat analysis.


Steve



Re: what is a threat analysis?

2005-08-10 Thread Stephen Kent

Dave & Michael,

In the DoD environment, a threat analysis for a system identifies the 
classes of adversaries that the author believes are of concern, and 
describes their capabilities and motivations. Russ's three questions 
are a concise way of stating this:

	- The "bad actors" are adversaries.
	- Their "capabilities" allude to where the adversaries fit 
into the system and what sorts of attacks they may employ to effect 
their goals.
	- Their "motivations" indicate what they are trying to do, the 
flip side of what we are trying to prevent them from doing.



The term is often used more broadly in the commercial world today, 
encompassing activities that might be better termed "threat 
assessment," "risk analysis," etc.


Steve



RE: Port numbers and IPv6(was: I-D ACTION:draft-klensin-iana-reg-policy-00.txt)

2005-07-20 Thread Stephen Kent

Phil,


...

Boy are you in for a shock when you try to connect to an ethernet with
802.1x.


I have yet to do so. I do have the facility on my Mac, but I've never 
had to turn it on.



Authentication is being built into the NIC cards. At some point in the
future it will not be possible for any device to connect to an Intranet
without first authenticating itself.


It could happen, but then too it might not.


And it will all have to be 100% transparent to the user.


only when it works :-)


  if folks rely on such distributed enforcement, they will get
  what they deserve.


You are behind the times, single point of failure approaches to security
are out.


layered defenses are a good notion, but mostly when the layers are 
under the same administrative control. all too often people forget 
that relying on the security provided by someone else is a risky 
proposition, as in your example of ISPs providing ingress filtering.



What people are looking to do is to contain attacks from within their
networks. Most large companies now have networks that are large enough
for what is inside the firewall to be at least as worrying as what is
outside.


fair statement




 why not just propose rigorous enforcement of setting the evil bit by
 all network attachment devices, etc?


Sarcasm is not a particularly useful mode of debate, particularly when
you are defending a dogma that has little practical success to recommend
it.


If it weren't a good analogy I don't think I would have received so 
many private responses congratulating me for it :-)


Steve





RE: Port numbers and IPv6(was: I-D ACTION:draft-klensin-iana-reg-policy-00.txt)

2005-07-20 Thread Stephen Kent

Phil,


  layered defenses are a good notion, but mostly when the layers are
  under the same administrative control. all too often people forget
  that relying on the security provided by someone else is a risky
  proposition, as in your example of ISPs providing ingress filtering.


I would restate your assertion:

It is a bad idea to rely on another party that cannot be held
accountable to you.

We all rely on other parties, the Internet is an example of extended
interdependency. The critical issue is accountability.

So in the question of ingress filtering what I am looking at is
mechanisms to create accountability.


the Internet is composed of Autonomous Systems, and they take the 
first word of the name very seriously. I suspect ISP accountability 
in China, for example, may be as successful as copyright enforcement 
in that region.



  If it weren't a good analogy I don't think I would have received so
  many private responses congratulating me for it :-)


This forum is very much wedded to a security architecture based on a
particular set of academic theories. It is no surprise that you find
support here, any more than the original pontifex maximus would no doubt
receive congratulations on his correct determination of the auspices from
the entrails of a goat.


I'm more a fan of goat cheese than entrails, but to each his own.

Maybe we would all be happier if you decided to not waste your time 
arguing with the folks in this forum, since we are so out of touch 
and irrelevant to the future of network security, at least as defined 
by the practitioners who appear to emphasize the appearance of 
security over security per se.


Steve




RE: Port numbers and IPv6(was: I-D ACTION:draft-klensin-iana-reg-policy-00.txt)

2005-07-19 Thread Stephen Kent

At 2:35 PM -0700 7/19/05, Hallam-Baker, Phillip wrote:

  Host and application security are not the job of the network.

They are the job of the network interfaces. The gateway between a
network and the internetwork should be closely controlled and guarded.

Nobody is really proposing embedding security into the Internet backbone
(at least not yet). But the backbone has always had controls enforced
such as ingress and egress filtering.


no, it does not, although many folks might like to believe this is 
true. relying on every ISP to perform such filtering also would be a 
blatant violation of the principle of least privilege anyway.



 Most people think that carriers
should not be allowing people to inject bogons.

Modern security architectures do not rely exclusively on application
security. If you want to connect up to a state of the art corporate
network the machine has to authenticate.


the notion that one has to "log into" the net is a quaint one, 
perhaps inspired by Windows and the registry. as a mac user, I can't 
relate to this notion, nor can most Unix users, I bet.



In the future every hub, every
router, every NIC will be performing policy enforcement.


if folks rely on such distributed enforcement, they will get what they deserve.

why not just propose rigorous enforcement of setting the evil bit by 
all network attachment devices, etc?



De-perimeterization is not really about removing the firewalls, it is
really about making every part of the *network* into a security control
point.


Firewalls were created, in part, because site admins realized that 
they were unable to perform configuration management for the many 
devices on their nets. a firewall was a device owned by the admin and 
under his control, and if it acted as a gatekeeper for the net, then 
his job was easier. the fact that it is an imperfect gatekeeper is a 
second order issue.


Steve



Re: When to DISCUSS?

2005-07-11 Thread Stephen Kent

Yakov,

Ultimately the marketplace will decide, but when a WG provides 
multiple solutions to the same problem it has the potential to 
confuse the marketplace, retard adoption of any solution, interfere 
with interoperability, etc.


Standards ought to avoid confusion, not contribute to it.

Steve



Re: Voting (again)

2005-04-16 Thread kent crispin
On Fri, Apr 15, 2005 at 04:03:02PM -0700, Hallam-Baker, Phillip wrote:
 
 
  I also believe the nomcom process does provide 
  accountability.  I think that the nomcom interview process 
  was more comprehensive than any job interview process I've 
  gone through.
 
 I think you make a fundamental error here, accountability is determined
 by whether we can get rid of someone, not by how they are appointed in
 the first place.

Oh.  Therefore voting has nothing to do with accountability, since it is a 
mechanism for selecting people in the first place, and therefore the 
premise of this thread is vacuous.

-- 
Kent Crispin 
[EMAIL PROTECTED]    p: +1 310 823 9358  f: +1 310 823 8649
[EMAIL PROTECTED] SIP: [EMAIL PROTECTED]




RE: E911 location services (CAS system too)

2004-06-13 Thread Stephen Kent
Harald,
You are right that the scheme I proposed in 1422 did not succeed, 
and today I would not suggest it. But, the reason I would not suggest 
it today is because I have come to believe that one should adopt CAs 
that are authoritative for the certs they issue, not trusted third 
parties. The DNS root is an example of such a CA, whereas RSA 
(proposed as the IPRA) was not.  If we deploy DNSSEC in a full, top 
down fashion, the effect is the same as what Kevin is suggesting, 
except that we would be using a standard cert format that is employed 
by many security protocols.

steve


UA 893 compensation

2004-03-05 Thread Stephen Kent
This is a note for all of the folks who flew on UA 893 on Friday, 
2/27, with the unexpected 24 hour delay via Seattle.

I just got off the phone with UA Customer Service (not Mileage Plus). 
They offered a 5K mile good will compensation for our 
inconvenience.  These miles will not count toward Premiere 
Qualification. You could also request some 500 mile domestic upgrades 
as an alternative. They were firm in refusing to offer qualifying 
miles, because they insist that such miles can be offered only as a 
result of paid travel, and we already received miles consistent with 
our paid travel.

Steve



Re: UA 893 compensation

2004-03-05 Thread Stephen Kent
At 12:40 -0500 3/5/04, John C Klensin wrote:
--On Friday, March 05, 2004 11:26 -0500 Stephen Kent
[EMAIL PROTECTED] wrote:
 This is a note for all of the folks who flew on UA 893 on
 Friday, 2/27, with the unexpected 24 hour delay via Seattle.
 I just got off the phone with UA Customer Service (not Mileage
 Plus). They offered a 5K mile good will compensation for our
 inconvenience.  These miles will not count toward Premiere
 Qualification. You could also request some 500 mile domestic
 upgrades as an alternative. They were firm in refusing to
 offer qualifying miles, because they insist that such miles
 can be offered only as a result of paid travel, and we already
 received miles consistent with our paid travel.
Steve, and others,

Negotiating about this sort of thing with the airline many of us
love to hate may not be productive or worth the trouble, but,
suppose that, instead of flying you nearly halfway to Taiwan
from SFO and then bringing you back to Seattle and starting
over, you had, e.g., originated in Seattle (assuming they have
such a flight at this time of year), they had equipment problems
and then rebooked you onto a Seattle-LAX flight and then an
LAX-Taipei flight.  Under that situation, they award
(occasionally after some yelling and screaming, but usually
without it) the SEA-LAX-Taipei mileage (you actually took those
flights) and not the shorter SEA-Taipei mileage.  And those
miles count for elite qualification.  This situation is
different, but the difference has to do with the subtle issues
of  flight numbers and segments, not with paid miles flown.
Good luck.
 john
John,

Good suggestion for a line of argument.

Steve



Re: TCP over IPSec ESP??

2004-02-23 Thread Stephen Kent
At 2:51 -0800 2/21/04, chintan sheth wrote:
Hi,

Is there anything called TCP over IPSec ESP? I believe
it should be IPSec ESP over TCP. Please clarify. Also,
point me to the relevant RFC #.
Thanks,

Chintan
TCP can be encapsulated by ESP.

The correct spelling for the protocol is IPsec, not IPSec.

ESP is not generally run over TCP.

RFC 2406 describes the use of ESP.

Steve Kent

author of 2401, 2402, 2406, ...
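As a rough illustration of the encapsulation order (ESP wrapping TCP, not the reverse), here is a minimal sketch of the RFC 2406 ESP packet layout; the field values, the fixed 4-byte alignment, and the absence of actual encryption and ICV are simplifications for illustration only:

```python
import struct

def esp_encapsulate(tcp_segment: bytes, spi: int, seq: int, block: int = 4) -> bytes:
    """Lay out an ESP packet around a TCP segment, per RFC 2406:
    4-byte SPI, 4-byte sequence number, payload, padding, 1-byte pad
    length, 1-byte Next Header (6 = TCP). Real ESP encrypts the payload
    plus trailer and normally appends an integrity check value."""
    pad_len = (-(len(tcp_segment) + 2)) % block   # align the trailer to the block size
    padding = bytes(range(1, pad_len + 1))        # default self-describing pad bytes
    header = struct.pack("!II", spi, seq)
    trailer = struct.pack("!BB", pad_len, 6)      # Next Header 6 marks TCP inside
    return header + tcp_segment + padding + trailer

# Illustrative 11-byte stand-in for a TCP segment.
pkt = esp_encapsulate(b"\x00\x50\x04\xd2" + b"payload", spi=0x1234, seq=1)
spi, seq = struct.unpack("!II", pkt[:8])          # outer ESP fields stay in the clear
```

The Next Header byte at the very end is what says "a TCP segment is inside," which is the sense in which TCP runs over ESP rather than the other way around.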



Re: Hi

2004-01-19 Thread kent
On Mon, Jan 19, 2004 at 10:53:18AM -0500, Noel Chiappa wrote:
  From: John Stracke [EMAIL PROTECTED]
 
  I didn't write that; the return address was faked.
 
 So much for mailing list security by only allowing posts from subscribers.

Security is not a binary condition.  

 This virus/worm is actually mildly interesting in the way it operates. I'm
 seeing lots of email from people with whom I would have corresponded long ago.
 So it's probably mining web pages for old email, and using the addresses it
 finds in the headers as source/dest pairs.

Perhaps, but that would be pretty impressive for a 16K executable --
maybe it downloads a second stage  -- there are a bunch of builtin urls,
eg:

http://www.elrasshop.de/1.php
http://www.it-msc.de/1.php
http://www.getyourfree.net/1.php
http://www.dmdesign.de/1.php
http://64.176.228.13/1.php
http://www.leonzernitsky.com/1.php
http://216.98.136.248/1.php
http://216.98.134.247/1.php
http://www.cdromca.com/1.php
http://www.kunst-in-templin.de/1.php
http://vipweb.ru/1.php
http://antol-co.ru/1.php
http://www.bags-dostavka.mags.ru/1.php
http://www.5x12.ru/1.php
http://bose-audio.net/1.php
http://www.sttngdata.de/1.php
http://wh9.tu-dresden.de/1.php
http://www.micronuke.net/1.php
http://www.stadthagen.org/1.php
etc
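The speculated harvesting step (mining archived pages for addresses, then pairing them as forged source/destination) can be sketched in a few lines; the regex and the page text below are illustrative, not taken from the worm itself:

```python
import itertools
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_addresses(html: str) -> list:
    """Extract distinct email addresses from a page, preserving order."""
    return list(dict.fromkeys(EMAIL_RE.findall(html)))

def forged_pairs(addresses: list) -> list:
    """Pair every harvested address with every other as (forged source, dest)."""
    return list(itertools.permutations(addresses, 2))

page = "archive: alice@example.org wrote to bob@example.net (cc alice@example.org)"
addrs = harvest_addresses(page)
```

This is why old correspondents suddenly appear in the headers: both the apparent sender and the recipient came out of the same archived page.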


-- 
Kent Crispin 
[EMAIL PROTECTED]    p: +1 310 823 9358  f: +1 310 823 8649
[EMAIL PROTECTED] SIP: [EMAIL PROTECTED]




Re: Visa for South Korea

2003-12-30 Thread Stephen Kent
At 11:34 -0500 12/30/03, Ken Hornstein wrote:
 From my reading of the Korean Embassy web page, it seems that US residents
will require a visa to attend the Seoul IETF.  I'm wondering if anyone
has gotten a visa to enter South Korea before, and if so, can they provide
any tips on the visa process?  (The only requirement that looks like it
will be a pain is the letter from your employer).
--Ken
I attended a technical meeting in Seoul in the summer of 2003, and 
did not require a visa. We should get a reading on this from our 
hosts.

Steve



Re: Hashing spam

2003-12-18 Thread kent
On Thu, Dec 18, 2003 at 03:39:58PM -0500, Keith Moore wrote:
 The problem with this analysis is that it assigns greater value to 
 contributions from subscribers than to contributions from 
 non-subscribers.  But often the failure to accept clues from 
 outsiders causes working groups to do harm

I don't believe this is true, for any normal definition of "often."  
"Occasionally" might be believable.

  - and filtering messages 
 in the #2 category increases this tendency.

One could just as easily argue that such filtering would decrease the
tendency, because people would modify their behavior to subscribe to
groups they cared about.  Also, one could just as easily argue that
working groups are just as likely to be harmed by distracting comments
from outsiders... 

 The occasional rejection 
 of #2 messages can be very harmful.

Seems more likely to me that the amount of harm would be lost in the
normal noise of ietf processes.

Regards
Kent

 On Dec 18, 2003, at 3:01 PM, Vernon Schryver wrote:
 
   1. on-topic messages from subscribers
   2. on-topic messages from non-subscribers
   3. noise from subscribers
   4. noise from non-subscribers
   5. pure spam such as advertisements for loan sharks
 
 In this list, only #1 is clearly good. It is good to avoid rejecting
 #2, but there is surely no harm in sometimes delaying #2.  If the
 senders of any rejected or false positive #2 received an informative
 non-delivery report so that they could retransmit, what would be the 
 harm?
 
 SpamAssassin is reported to be better than 60% accurate.  #2 is surely
 rare compared to #1.  Thus, as long as SpamAssassin white-lists all
 subscribers, there would be no harm in the occasional rejection of #2.
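The five categories and the whitelist-subscribers policy in the quoted text can be sketched as follows; the sender addresses are made up for illustration:

```python
def categorize(sender: str, on_topic: bool, is_spam: bool, subscribers: set) -> int:
    """Map a message onto the five quoted categories (1 best, 5 worst)."""
    if is_spam:
        return 5
    if on_topic:
        return 1 if sender in subscribers else 2
    return 3 if sender in subscribers else 4

def reaches_filter(sender: str, subscribers: set) -> bool:
    """With all subscribers whitelisted, only non-subscriber mail
    (categories 2, 4, and 5) is ever exposed to the filter's error rate."""
    return sender not in subscribers

subscribers = {"alice@example.org"}
```

The disagreement above is precisely about how costly the filter's false positives on category 2 are, since that is the only "good" traffic the filter ever touches under this policy.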

-- 
Kent Crispin   Be good, and you will be
[EMAIL PROTECTED], [EMAIL PROTECTED]     lonesome.
p: +1 310 823 9358  f: +1 310 823 8649   -- Mark Twain
SIP: [EMAIL PROTECTED]




Re: PKIs and trust

2003-12-15 Thread Stephen Kent
Keith,

I've authored several papers that capture what I see as the essence 
of your characterizations, in a simple form. The central notion is 
that most of these relationships are NOT about trust, but rather 
about authority. if one views them in this fashion, then  it becomes 
apparent that the entities that are authoritative for identification 
and authorization assertions should be CAs, and we, as individuals 
with many distinct identities, should expect to hold many certs, each 
corresponding to one identity. This is what happens in the physical 
world with most physical credentials: passports, frequent traveller 
cards, etc.

Steve



Re: PKIs and trust

2003-12-15 Thread Stephen Kent
At 4:31 +0900 12/16/03, Masataka Ohta wrote:
Stephen Kent;

I've authored several papers that capture what I see as the essence 
of your characterizations, in a simple form. The central notion is 
that most of these relationships are NOT about trust, but rather 
about authority. if one views them in this fashion, then  it 
becomes apparent that the entities that are authoritative for 
identification and authorization assertions should be CAs, and we, 
as individuals with many distinct identities, should expect to hold 
many certs, each corresponding to one identity.
The problem for such a PKI is that, if we have certs based on
existing trust relationships (e.g. I trust that some organization has
the authority to issue passports), we can exchange shared secrets
using those relationships, so that we don't need any public keys.
In principle, yes, but in practice it is preferable to use public 
keys for a variety of security reasons, not to mention the existence 
of a lot of software that can make use of certs and public keys.


This is what happens in the physical world with most physical 
credentials: passports, frequent traveller cards, etc.
Our trust relationships in these cases are so strong that we
can be delivered not only PINs (shared secret) but also physical
credentials.
Yes, but it is cheaper to issue credentials in the form of certs and 
avoid postage and related physical credential costs. Also, PINs are 
meant to be remembered by users and thus are more vulnerable to 
guessing than key pairs. So we have to put into place attack 
monitoring and response schemes, e.g., locking down an account after 
N bad login attempts, which creates DoS opportunities! So there are 
many reasons to prefer PKI here, although there are downsides too.

Then, who needs public key cryptography?

Thus, many expect that, once a PKI is formed, it can create any
trust relationship for anything.
We know a PKI does not.
agreed.

The next question is: are one, two, or millions of PKIs worth having?

I don't think they do.

I don't know how many we need. But, when I look in my travel bag I 
see about 30+ paper and plastic credentials, all of which could be 
turned into certs under the right circumstances, without creating new 
trusted organizations, and with the benefit of greater security and 
less bulk (bits are thin and light weight!).

Steve



Re: PKIs and trust

2003-12-15 Thread Stephen Kent
At 6:08 +0900 12/16/03, Masataka Ohta wrote:
Stephen Kent;

I'm having a feeling that you call a set of software/hardware
to handle certs a PKI.
no, there is a lot more to a PKI than hardware and software.


The problem for such a PKI is that, if we have certs based on
existing trust relationships (e.g. I trust that some organization has
the authority to issue passports), we can exchange shared secrets
using those relationships, so that we don't need any public keys.

In principle, yes, but in practice it is preferable to use public 
keys for a variety of security reasons,
In practice, I see no security reason not to use shared key
cryptography. See below about the practice of the cases
you choose (passports, frequent traveller cards, etc.)
not to mention the existence of a lot of software that can make use 
of certs and public keys.
I'm afraid you are saying we should have PKI because we have PKI.
why do we use browsers to access many databases where other 
mechanisms might be more appropriate? because we all have free 
browsers, users and developers are comfortable with the paradigm, ...


This is what happens in the physical world with most physical 
credentials: passports, frequent traveller cards, etc.


Our trust relationships in these cases are so strong that we
can be delivered not only PINs (shared secret) but also physical
credentials.


Yes, but it is cheaper to issue credentials in the form of certs 
and avoid postage and related physical credential costs.
In all (passports and frequent traveller cards) cases, it is
required that applicants physically contact authorities.
True for an initial contact for a passport, not for renewal. When I 
referred to frequent traveller cards I had in mind the 
airline/hotel/car rental frequent traveller programs to which I 
belong, not credentials that get me through security with less 
examination.  (Although my frequent flyer cards DO get me into 
shorter lines in many airports.)

In Japan, and maybe in other countries, use of physical mail is
unavoidable to get a passport, because it is the way to confirm the
address of the applicant.
The address of an applicant is not even printed on a US passport. It 
is not part of what the U.S. Department of State attests to when 
issuing a passport.

One can pick up frequent traveller cards, at least paper ones, at
airport.
I can get my form of frequent traveller card via a web interaction, 
with no physical presence!


Also, PINs are meant to be remembered by users and thus are more 
vulnerable to guessing than key pairs. So we have to put into place 
attack monitoring and response schemes, e.g., locking down an 
account after N bad login attempts, which creates DoS 
opportunities! So there are many reasons to prefer PKI here, 
although there are downsides too.
Here, we are talking about physical credentials optionally accompanied
by PINs. So, long PINs may be securely stored in the physical
credentials (maybe with additional short PINs to activate the physical
credentials, which is also the case for devices storing secret keys of
public key cryptography). DoS is to steal the physical credentials.
I think we are talking about different use models. If I have a 
frequent flyer account with web access, and if someone tries to break 
in by guessing my PIN, the airline will have to shut down the account 
after some small number of tries, to prevent an effective guessing 
attack. This denies ME access, and it imposes costs for the airline, 
because I may have to make a toll free call to someone to cause my 
account to be reactivated.  That is a DoS attack that could be 
avoided if we used crypto keys for auth.
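A minimal sketch of that lockout-induced DoS; the PIN, threshold, and class name are invented for illustration:

```python
class PinAccount:
    """Web account that locks after N consecutive bad PIN attempts."""

    def __init__(self, pin: str, max_failures: int = 3):
        self._pin = pin
        self._max = max_failures
        self._failures = 0
        self.locked = False

    def login(self, attempt: str) -> bool:
        if self.locked:
            return False                  # even the correct PIN is refused now
        if attempt == self._pin:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self._max:
            self.locked = True            # owner now needs an out-of-band reset
        return False

acct = PinAccount("4821")
for guess in ("0000", "1111", "2222"):    # an attacker's three wrong guesses
    acct.login(guess)
owner_shut_out = not acct.login("4821")   # correct PIN, but the account is locked
```

The attacker never learns the PIN, yet still denies the owner service; authentication with a long random key removes the guessing threat and hence the need for this kind of lockout.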

The next question is: are one, two, or millions of PKIs worth having?

I don't think they do.

I don't know how many we need. But, when I look in my travel bag I 
see about 30+ paper and plastic credentials, all of which could be 
turned into certs under the right circumstances, without creating 
new trusted organizations,
I think we can, at least, agree that we need no new trusted
organizations or commercial CAs.
agreed!


and with the benefit of greater security and less bulk (bits are 
thin and light weight!).
That you have paper and plastic credentials means that you don't
need much security.
not really. I rarely use most of those credentials today. They are 
largely replaced by web accesses where knowledge of the account 
number and a PIN provides the authentication that used to be inferred 
by physical possession of the card.

That you have an IC card containing 30+ secret keys activated with
a short PIN does not mean so much security. How do you think about
an IC card erases all the secret information after N bad PINs, which
creates DoS opportunities?
I am not too worried about physical security for a crypto hardware 
token, because I am careful to not lose such tokens, just like I am 
careful to not lose physical (paper/plastic) cards today. The 
advantage to using crypto for authentication is that the keys are 
longer

Re: www.isoc.org unreachable when ECN is used

2003-12-12 Thread kent
On Fri, Dec 12, 2003 at 06:23:48AM +0100, Anthony G. Atkielski wrote:
 But since ISOC's firewalls have not been updated, you won't be able to
 get to their site from Linux.

Nonsense.  I'm running Linux, several versions.  I can get to the ISOC
site from all of them.

-- 
Kent Crispin   Be good, and you will be
[EMAIL PROTECTED], [EMAIL PROTECTED]     lonesome.
p: +1 310 823 9358  f: +1 310 823 8649   -- Mark Twain
SIP: [EMAIL PROTECTED]




RE: ITU takes over?

2003-12-12 Thread Stephen Kent
At 8:39 -0800 12/12/03, Tony Hain wrote:
vinton g. cerf wrote:
 ...
 Unfortunately, the discussion has tended to center on ICANN as the only
 really visible example of an organization attempting to develop policy
 (which is being treated as synonymous with governance
To further your point, an area completely outside of ICANN's purview, yet an
area requiring governance is PKI. We are at the point where deployment of a
PKI has moved beyond technical issues, becoming almost completely the policy
& politics of trust. Until the politicians broker the trust relationships,
there is nothing technology can do.
Tony
Not everyone agrees that PKIs are all about trust :-). Still, even if 
one establishes PKIs based on who is authoritative for the data in 
certs, as I always suggest, I agree that the major remaining problems 
are business and political, not technical.

Steve


