Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-04-13 Thread Stefan Santesson

On 4/13/13 2:53 AM, Henry B. Hotz h...@jpl.nasa.gov wrote:

You've just said that there are only two valid responses from an OCSP
server which has access to a white list.  In other words such a server
inherently cannot provide any information about the actual revocation
status of a cert, and its responses cannot distinguish between a
lost/forged/whatever cert and a known-bad (actually revoked) cert.

I realize it's unfair of me to take this quote out of context.  Apologies,
but I want to focus on it. (Stole your words :))


No, that is NOT what I said.

An extension may indicate whether a serial number that results in a
revoked response is actually issued and revoked, or whether there is
some other particular reason for responding revoked.
In my universe a syntactically valid serial number can only be good,
revoked or non-issued. But I am expressing this in general terms anyway.

So with the help of extensions, a responder can provide both white and
black list data.

The legacy client will just look at the status and act accordingly. The
extension-aware client, and a later audit, can use the extensions to
determine further what happened.
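As a sketch of that split, here is purely illustrative Python; the
"non_issued" extension name is hypothetical and is not defined by RFC
2560bis:

```python
# Illustrative only: "non_issued" is a hypothetical extension name,
# not one defined by RFC 2560bis.

def legacy_interpret(status):
    # A legacy client looks only at the CertStatus value.
    return {"good": "accept",
            "revoked": "reject",
            "unknown": "try other source"}[status]

def extension_aware_interpret(status, extensions):
    # An extension-aware client (or a later audit) can refine "revoked"
    # to distinguish an actually revoked cert from a non-issued serial.
    if status == "revoked" and extensions.get("non_issued"):
        return "reject (serial was never issued)"
    return legacy_interpret(status)
```

Either way the legacy client rejects; only the extension-aware consumer
learns why.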

/Stefan




Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-04-13 Thread Stefan Santesson


On 4/13/13 6:51 PM, Piyush Jain piy...@ditenity.com wrote:


 An extension may indicate whether a serial number that results in a
 revoked response is actually issued and revoked, or whether there is
 some other particular reason for responding revoked.
 In my universe a syntactically valid serial number can only be good,
 revoked or non-issued. But I am expressing this in general terms anyway.

[Piyush]  From an RP's perspective, finding the status of serial numbers
serves no purpose unless it can associate that serial number with a
certificate.

Absolutely, that is the client's perspective of this.

When an OCSP client extracts the serial number from a certificate and
sends it to the responder to determine the status, it is acting under a
very important assumption: that the CA has issued that certificate and
that it has issued only one certificate with that serial.

Absolutely

If you say that this assumption is invalid, your trichotomy of serial
status is not mutually exclusive any more. The same serial can now be
associated with a good, revoked and non-issued status.

I didn't say that. I said it can only be good, revoked or non-issued.

Notice the difference? My sentence contains the word "or", yours the word
"and".

These are the three possible states of a serial number. It must always be
in ONE of these states.

And also the client cannot be sure if
the CA delegated responder's certificate is good or non-issued. This
renders
OCSP completely useless.

I did not talk about responder certificates.



 
 So with the help of extensions, a responder can provide both white and
black
 list data.

[Piyush] Maybe you are right. But I doubt that you want a discussion on
the feasibility/security implications of this; otherwise it would have
been a stated goal of 2560bis and would have been captured in the abstract.

If new extensions are defined, then the RFC defining these extensions will
discuss relevant security implications related to those extensions.

I tried to think ahead, but I cannot figure out a way to communicate the
white-listed status of a CA-delegated responder to the client without
circular logic.

I can.

The good response provides by default a minimum level of positive
confirmation.
But it is also made clear that an extension allows good to say more, as
long as it fits within that default statement of good.

A white list falls within that basic statement.
That is, the serial numbers that would return good in a white-list
scenario would also return good if the responder provided default OCSP
status information.
White-listing simply reduces the number of good responses to occasions
where the status represents a known existing certificate.
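A rough sketch of that reduction, assuming a hypothetical
white-list-capable responder (none of these names come from the spec):

```python
def whitelist_responder(serial, issued_serials, revoked_serials):
    """Return (status, extensions) for a requested serial.

    Sketch only: a white-list responder answers "good" solely for
    serials it knows were issued and are not revoked; every other
    syntactically valid serial gets "revoked", so legacy clients
    reject it.
    """
    if serial in revoked_serials:
        return ("revoked", {})
    if serial in issued_serials:
        return ("good", {})
    # Non-issued serial: respond "revoked", flagged via a hypothetical
    # extension so extension-aware clients can tell the cases apart.
    return ("revoked", {"non_issued": True})
```

The point is that "good" never escapes the white list, while everything
else maps onto the existing revoked status.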




I completely fail to see why revoked for non-issued is a prerequisite for
future extensions that can be used to provide white-list data (if that is
even possible under the CA-delegated responder trust model).

I'll try again. It is only possible to reduce the good status responses
to white-listed certificates if another suitable status can be returned
for any other requested serial number.
That is, for the combined set of revoked and non-issued serial numbers
that do not belong on the white list.

Responding unknown may not be desirable, as it is likely to cause the
client to try another source.

With the update in RFC 2560bis it is now made clear (in the definition of
revoked) that it is conformant to respond revoked for both revoked and
non-issued certificates, whereas before that could be questioned as a
stretch of semantics.

So yes indeed, the update makes it easy to take the next step to a
white-list OCSP responder by adding a suitable extension.
Of course, only those who understand the new extension will know that it
is a white-list response.

/Stefan




Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-04-13 Thread Stefan Santesson


On 4/13/13 8:56 PM, Piyush Jain piy...@ditenity.com wrote:

 [Piyush]  From an RP's perspective finding status of serial numbers
 serves no purpose unless they can associate that serial number with a
 certificate.
 
 Absolutely, that is the client's perspective of this.

Great. We agree
 
 When an OCSP client extracts the serial number from a certificate and
 sends it to the responder to determine the status, it is acting under a
 very important assumption - the CA has issued that certificate and that
 it has issued only one certificate with that serial.
 
 Absolutely
Great. We agree.  
Let me reiterate - the OCSP client extracts the serial number assuming
that the CA issued the certificate and issued only one certificate with
that serial number. So why do we need the responder to return non-issued
for the same certificate?

Because of your lack of imagination :)
Shift your focus from the client to the server.

A client has control over never asking about a non-issued certificate,
unless the CA is broken.
The server has not.

A bad, broken or malicious client may, for whatever reason, send a
request for a non-issued certificate.
If that happens, the protocol should allow ways to handle it.

Just imagine a benign situation such as a client with a bug that sends
the wrong serial number if the most significant bit is set.
For half of its requests it sends requests for completely non-existent
certs.

Perhaps not likely, but possible.

The chance of discovering this error increases significantly if the
non-issued serial number requests return revoked instead of good.

This is just on top of the case of a broken CA.
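The hypothetical bug could look like this (a toy model, not any real
client):

```python
def buggy_request_serial(serial, width=64):
    # Toy model of the bug described above: if the most significant bit
    # of the serial is set, the client garbles it (here: drops the top
    # bit) and ends up querying a completely non-existent serial.
    msb = 1 << (width - 1)
    if serial & msb:
        return serial ^ msb  # wrong serial goes out on the wire
    return serial
```

Against a responder that answers revoked for non-issued serials, half of
this client's lookups fail hard and the bug surfaces quickly; a responder
answering good would mask it.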



 
 If you say that this assumption is invalid, your trichotomy of serial
 status is not mutually exclusive any more. Same serial can now be
 associated with a good, revoked and non-issued status.
 
 I didn't say that. I said it can only be good, revoked or non-issued
 Notice the difference? My sentence contains the word or, yours the
word
 and.

I stand by my words :).

Feel free. But saying a million times that a black stone is white does
not change its colour.


 These are the three possible states of a serial number. It must always
be
in
 ONE of these states.
As long as you assume that a certificate signed by the CA cannot be
non-issued, i.e. that the CA knows what certificates it issued.
If you break that assumption, you have to deal with the possibility of
different certificates with the same serial number (after all,
certificates are getting issued without the CA's knowledge). This implies
that the same serial number can be associated with both a good
certificate and a non-issued certificate.

Let me frame it in a different way. If you get a good response from a
responder that issues revoked for non-issued, can you be sure that the
certificate you are checking is not a non-issued certificate?


No, to take it to that level you need a new white-list extension.
You see this from the wrong angle.

This change is not intended to provide the client with indisputable
knowledge about whether a certificate is non-issued or not.
It is intended to give the responder a means of causing the client to
reject, rather than accept, something that is known not to be good.



 
 And also the client cannot be sure if
 the CA delegated responder's certificate is good or non-issued. This
 renders OCSP completely useless.
 
 I did not talk about responder certificates.

You did not. But that does not make my statement any less relevant, or
make it incorrect, and it strengthens Henry's point about whether this
spec achieves what it is trying to do.



It makes your statement totally irrelevant until the point where you can
demonstrate its relevance.

/Stefan






Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-04-11 Thread Stefan Santesson
On 4/12/13 1:31 AM, Henry B. Hotz h...@jpl.nasa.gov wrote:

What I would find helpful, and what I think some people really would
like, is for OCSP to be able to provide white-list information in
addition to the previous black-list information.  When I read through
2560bis, I could not tell if there was an extension which would allow an
RP to tell if good actually meant a cert was on the white list (and to
know the responder has the white list), or merely not on the black list.
(Yes, I'm repeating myself.  Am I making more sense, or just wasting
everyone's time?)

What we have done is roll out the red carpet and make it possible for you
to do that.

- The only thing you need to do now is define a white-list extension.


To put it simply: given how OCSP is designed, the only way to allow good
to represent a white list is if revoked can be returned for everything
else.
Everything else in this context means every other revoked or non-issued
certificate serial number under that CA.


With RFC 2560 that is not possible in a clean way.
With this new extension in RFC 2560bis, it is now possible.




Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-04-10 Thread Stefan Santesson
Nothing has changed in this regard.

The good response is pretty clear that by default it provides information
that the cert is not on a black list (is not known to be revoked).
However, it is also made clear that extensions may be used to expand this
default information about the status.

This is how it always has been, and still is in RFC 2560bis.

The revoked response simply means that the certificate is revoked.
Combined with the new extension, it MAY also be used for non-issued certs.

It's really as simple as that.

It is only the discussion on this list that is confusing and that has
managed to turn something really simple into a complicated debate.

/Stefan


On 4/8/13 9:35 PM, Henry B. Hotz h...@jpl.nasa.gov wrote:

Actually, what I get from this and all the other discussions is that it's
unclear if the updated OCSP satisfies the suitability for the intended
purpose test.  Or at least it fails the KISS principle w.r.t. that.

Rephrasing:  an OCSP client should be able to tell from an OCSP response
if:  a) the subject cert is on the CA's white list, b) the subject cert is
on the CA's black list, c) the subject cert is not on either list, or
finally d) the OCSP server is obsolete, and doesn't support making those
distinctions.  It's not trivial to see how to parse 2560bis responses
w.r.t. those cases, therefore it's highly likely that computational
complexity will prevent us from doing so.  Even if that's not actually
the case, then implementor misunderstandings will prevent us from doing
so in practice.

Therefore I vote against moving this draft forward.  I just don't see the
point.

If someone were to write an informational RFC which explained how to
determine which of the 4 cases an OCSP response fell into, AND if said
RFC also convinced me that the decision process was easy to understand,
THEN I would change my vote.  Obviously an appendix in 2560bis would
serve just as well.
___
pkix mailing list
p...@ietf.org
https://www.ietf.org/mailman/listinfo/pkix




Re: [pkix] Gen-ART review of draft-ietf-pkix-rfc2560bis-15

2013-04-01 Thread Stefan Santesson
On 3/29/13 5:17 PM, Piyush Jain piy...@ditenity.com wrote:

' revoked status is still optional in this context in order to maintain
backwards compatibility with deployments of RFC 2560.'

I fail to understand this statement about backward compatibility.
How does revoked being optional/required break backward compatibility?
The only reason cited in the WG discussions to use revoked for not-issued
was that any other approach would break backward compatibility with legacy
clients. And now the draft says that revoked is optional because making it
required won't be backward compatible.

Yes. Making it required would prohibit other valid ways to respond to
this situation that are allowed by RFC 2560 and RFC 5019, such as
responding good or responding with an unauthorized error.


And it gives the impression that the best course of action for 2560bis
responders is to start issuing revoked for not-issued, which is far from
the originally stated goal of providing a way for CAs to be able to return
revoked for such serial numbers.

The latter is what optional means.

/Stefan




Re: Gen-ART review of draft-ietf-pkix-rfc2560bis-15

2013-03-28 Thread Stefan Santesson
I could.

My worry is just that this is such a contentious subject, and it took us
several hundred emails to reach this state, that if I add more
explanations, people will start disagreeing with them and we will end up
in a long debate on how to correctly express this.

Is this important enough to do that?

/Stefan


On 3/27/13 3:30 PM, Black, David david.bl...@emc.com wrote:

Hi Stefan,

 Does this answer your question?

Yes, please add some of that explanation to the next version of the draft
;-).
Coverage of existing responder behavior/limitations (important running
code
concerns, IMHO) and alternatives to using revoked (have a number of
tools
to prevent the client from accepting a bad certificate) seem particularly
relevant.

Thanks,
--David


Re: Gen-ART review of draft-ietf-pkix-rfc2560bis-15

2013-03-28 Thread Stefan Santesson
I have given this a go by expanding the note as follows:

NOTE: The revoked state for known non-issued certificate serial
 numbers is allowed in order to reduce the risk of relying
 parties using CRLs as a fall back mechanism, which would be
 considerably higher if an unknown response was returned. The
 revoked status is still optional in this context in order to
 maintain backwards compatibility with deployments of RFC 2560.
 For example, the responder may not have any knowledge about
 whether a requested serial number has been assigned to any
 issued certificate, or the responder may provide pre produced
 responses in accordance with RFC 5019 and, for that reason, is
 not capable of providing a signed response for all non-issued
 certificate serial numbers.


Does this solve your issue?
I think this is as far as I dare to go without risking a heated debate.

/Stefan


On 3/27/13 5:08 PM, Black, David david.bl...@emc.com wrote:

Stefan,

 Is this important enough to do that?

IMHO, yes - the running code aspects of existing responder
behavior/limitations
are definitely important enough for an RFC like this that revises a
protocol spec,
and the alternatives to revoked feel like an important complement to
those
aspects (discussing what to do instead when responder
behavior/limitations are
encountered).

I appreciate the level of work that may be involved in capturing this, as
I've had my share of contentious discussions in WGs that I chair - FWIW,
I'm currently chairing my 4th and 5th WGs.  OTOH, when a WG has put that
much
time/effort into reaching a (compromise) decision, it really is valuable
to record why the decision was reached to avoid recovering that ground
in the future and (specific to this situation) to give implementers some
more context/information on how the protocol is likely to work in
practice.

Thanks,
--David


Re: Gen-ART review of draft-ietf-pkix-rfc2560bis-15

2013-03-28 Thread Stefan Santesson
Great,

I will issue an update shortly.

/Stefan

On 3/28/13 3:51 PM, Black, David david.bl...@emc.com wrote:

 Does this solve your issue?
 I think this is as far as I dare to go without risking a heated debate.

Yes, that suffices for me - it provides a cogent explanation of why
revoked is optional, and the existing text on CRLs as a fallback
mechanism suffices to illuminate a likely consequence of not using
revoked.

Thank you,
--David

 -Original Message-
 From: Carlisle Adams [mailto:cad...@eecs.uottawa.ca]
 Sent: Thursday, March 28, 2013 9:57 AM
 To: 'Stefan Santesson'; Black, David; s...@aaa-sec.com; mmy...@fastq.com;
 ambar...@gmail.com; slava.galpe...@gmail.com; gen-...@ietf.org
 Cc: p...@ietf.org; 'Sean Turner'; ietf@ietf.org
 Subject: RE: Gen-ART review of draft-ietf-pkix-rfc2560bis-15

 Hi,

 This wording sounds fine to me.  Thanks Stefan!

 Carlisle.


Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-03-28 Thread Stefan Santesson
On 3/27/13 10:11 PM, Martin Rex m...@sap.com wrote:

It was the Security co-AD Russ Housley who indicated _early_ during
the discussion of that draft (2.5 years after it had been adopted
as a WG item) that he considered some of the suggested abuses of
existing error codes unacceptable

For the record, Russ's comment was:

I find the use of unauthorized to indicate that a client cannot make a
multi-certificate request unacceptable.

Which is completely different from the concern you raised.

/Stefan




Re: Gen-ART review of draft-ietf-pkix-rfc2560bis-15

2013-03-27 Thread Stefan Santesson
Hi David,

Yes, I failed to respond to that aspect.

This is a bit complicated, but we have a large legacy to take into
account, where some responders implement just RFC 2560, while some deliver
pre-generated responses according to RFC 5019 (Lightweight OCSP). LW
responders are not capable of producing a signed response at the time of
responding, and if such a responder finds a request for a certificate
where no pre-produced response exists, it will reply with an unsigned
error response, unauthorized, which is also a legitimate way to respond.
So the actual OCSP responder may actually know that the certificate was
never issued, but since it delivers pre-produced responses through a CDN,
it cannot provide a revoked response in real time.
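The deployment difference can be sketched as illustrative pseudologic
(nothing here is normative; the function and parameter names are made up):

```python
def ocsp_answer(serial, preproduced=None, known_non_issued=frozenset()):
    # An RFC 5019-style responder serves only pre-produced, pre-signed
    # responses (e.g. via a CDN); with no response on file, an unsigned
    # "unauthorized" error is its only real-time option.
    if preproduced is not None:
        return preproduced.get(serial, "unauthorized")
    # A real-time RFC 2560bis responder MAY answer "revoked" for a
    # serial it knows was never issued -- but this is optional.
    if serial in known_non_issued:
        return "revoked"
    return "good"
```

Both branches are conformant ways to handle a request for a non-issued
serial, which is why revoked is a MAY rather than a SHOULD.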

So the major aim of the current wording is to declare that the revoked
response is a MAY because there are other valid alternatives.

We also want to avoid putting down a SHOULD respond revoked if a
certificate is known to be not-issued, because that would require us to
define what known to be non-issued actually means. And that could be
quite tricky, as OCSP responders are by no means required to have this
knowledge.

The OCSP responder simply has a number of tools to prevent the client
from accepting a bad certificate.
This update of OCSP simply allows responders to use the revoked response
as a preventive measure, without mandating it.

This is also related to activities in the CA/Browser Forum, where they put
down requirements that responders complying with CAB rules must not
respond good for certificates that were never issued.
With this update to OCSP, they can now mandate in their policies both that
their responders MUST know if a certificate was never issued and that they
MUST respond revoked.

So we allow other communities to raise the bar even if the base standard
defines the response as optional.

In theory we could possibly say that responding revoked is optional, but
if you choose between revoked and unknown then you SHOULD favour revoked
over unknown. But such nested requirements just feel bad and are
impossible to test compliance against. I'd much rather just leave it
optional. I think the Note gives a clear recommendation on this, and the
rationale, without spelling it out as a requirement.

Does this answer your question?


On 3/27/13 12:51 AM, Black, David david.bl...@emc.com wrote:

Hi Stefan,

This looks good - thank you for the prompt response.

It looks like my speculation on item [1] was wrong, so could you respond
to the question below, please?:

 [1] Section 2.2:
 
 NOTE: The revoked state for known non-issued certificate serial
 numbers is allowed in order to reduce the risk of relying
 parties using CRLs as a fall back mechanism, which would be
 considerably higher if an unknown response was returned.
 
 Given this explanation, I'm surprised that the use of revoked
instead of
 unknown for a known non-issued certificate is a MAY requirement and
 not a SHOULD requirement.  Why is that the case?

--

Beyond that, the proposed actions (or proposed non-actions) on items
[2]-[5]
are fine with me, Sean's taken care of the author permissions item from
idnits, and I assume someone has or will check the ASN.1 .

Thanks,
--David

 -Original Message-
 From: Stefan Santesson [mailto:ste...@aaa-sec.com]
 Sent: Monday, March 25, 2013 10:21 PM
 To: Black, David; s...@aaa-sec.com; mmy...@fastq.com; ambar...@gmail.com;
 slava.galpe...@gmail.com; cad...@eecs.uottawa.ca; gen-...@ietf.org
 Cc: p...@ietf.org; Sean Turner; ietf@ietf.org
 Subject: Re: Gen-ART review of draft-ietf-pkix-rfc2560bis-15
 
 Hi David,
 
 Thanks for the review.
 My reply in line.
 
 On 3/26/13 1:25 AM, Black, David david.bl...@emc.com wrote:
 
 Authors,
 
 I am the assigned Gen-ART reviewer for this draft. For background on
 Gen-ART, please
 see the FAQ at 
http://wiki.tools.ietf.org/area/gen/trac/wiki/GenArtfaq.
 
 Please resolve these comments along with any other Last Call comments
you
 may receive.
 
 Document: draft-ietf-pkix-rfc2560bis-15
 Reviewer: David L. Black
 Review Date: March 25, 2013
 IETF LC End Date: March 27, 2013
 
 Summary:
 This draft is on the right track but has open issues, described in the
 review.
 
 This draft updates the OCSP protocol for obtaining certificate status
 with some minor extensions.
 
 Because this is a bis draft, I reviewed the diffs against RFC 2560.
 
 I did not check the ASN.1.  I also did not see a writeup for this draft
 in the data tracker, and so will rely on the document shepherd to
 ensure that the ASN.1 has been checked when the writeup is prepared.
 
 I found five open issues, all of which are minor, plus one idnits item
 that is probably ok, but should be double-checked.
 
 Minor issues:
 
 [1] Section 2.2:
 
 NOTE: The revoked state for known non-issued certificate serial
 numbers is allowed in order to reduce the risk of relying
 parties using CRLs as a fall

Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-03-27 Thread Stefan Santesson
It is risky to assume that existing clients would work appropriately if
you send them a new, never-before-seen error code.

I'm not willing to assume that unless a big pile of current implementers
assures me that this is the case.

/Stefan

On 3/27/13 3:14 PM, Martin Rex m...@sap.com wrote:

Stefan Santesson wrote:
 Martin Rex m...@sap.com wrote:
 
 Adding 3 more OCSPResponseStatus error codes {
no_authoritative_data(7),
 single_requests_only(8), unsupported_extension(8) } with well-defined
and
 conflict-free semantics to the existing enum would be perfectly
backwards
 compatible.
 
 Of course it is backwards compatible with the standard, but not with the
 installed base.
 
 What would happen to the installed base of clients if OCSP responders
 would change from current unauthorized to one of your new error codes?


Actually, please look at the I-D text again.  Even if the servers were to
change their response to a new error code for I can not respond
authoritatively, it would be 100% backwards compatible with the clients.

That is at least what the proposed change implies!

So let's assign new error codes and have servers change their response!



Backwards compatibility is relevant in the same fashion as
interoperability
-- interoperability among independent implementations as well as interop
between new implementations and the installed base.

Since the current change only conveys information, and does that in
an extremely ambiguous fashion, moving to a new error code is provably
NOT going to be any problem for interop.  The desired client behaviour
is completely unspecified, so *ANY* client behaviour will be acceptable.


Based on the current text, your claim that assigning new error codes
for this purpose would be backwards incompatible must be bogus.


What I can not yet completely rule out, though, is that there are some
currently unstated expectations / desired behaviour that you would
want to retain, and which is currently undocumented.  Should that be
the case, then it is paramount that any such expectation about desired
behaviour is ADDED to the document, in order to enable interop with
independent implementations.

It is conceivable that such desired/expected behaviour consists of some
kind of blacklisting that had been previously implemented for the
unauthorized(7) error code, and that the inventors of the rfc5019
profile (VeriSign and Microsoft) wanted to take advantage of.

Should that be the case, then I would really be curious what kind of
blacklisting that is that implementors are thinking of when they
express their concern a move to a newly assigned error code would
be backwards incompatible.  Is it about the caching of that responses
on clients?  Is there any negative response caching?  Are negative
responses cached per server or per (requested) certificate status?


-Martin




Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-03-26 Thread Stefan Santesson
Unfortunately what you suggest would not be backwards compatible, and
can't be done within the present effort.

Responders using 5019 need to work for the foreseeable future with legacy
clients that know nothing about the update you suggest.
This group and the IETF made a decision when publishing 5019, that using
unauthorized in the present manner was a reasonable tradeoff.

I still think it is.

Unless you can convince the community of your course of action, I don't
see this happening.

/Stefan

On 3/26/13 6:28 AM, Martin Rex m...@sap.com wrote:

Stefan Santesson wrote:
 
 Whether we like it or not. This is the legacy.
 There is no way for a client to know whether the OCSP responder
implements
 RFC 2560 only or in combination with RFC 5019.
 
 So therefore, the update that was introduced in 5019 must be expected by
 all clients from all responders. Therefore it is included in 2560bis.
 
 What in your mind, would be a better solution?

Simply listing two mutually exclusive semantics for a single error code
looks like a very bad idea.  The correct solution would be to assign
a new error code for the semantics desired by rfc5019, fix rfc5019
and its implementations, and then mention the rfc5019 semantics for
the error code unauthorized in rfc2560bis as defective and deprecated.

rfc5019 is hardwired to SHA1 for the cert hashing algorithm, so
it will need to get updated at some point already.

The second issue is to explain that the error-code response for can not
produce an authoritative response is not applicable to OCSPRequests
that contain more than one element in the requestList, because the
OCSP client will not know to which elements in the request the error-code
response applies.

Actually, it would be sensible to define two more new error codes,
one that indicates that the OCSP server does not support multiple
requestList entries, and one that indicates to the client that the
server does not support one of the OCSP protocol extensions requested
by the client (multiple requests is not a protocol extension).


-Martin

 On 3/23/13 7:52 AM, Martin Rex m...@sap.com wrote:
 
 The IESG wrote:
  
  The IESG has received a request from the Public-Key Infrastructure
  (X.509) WG (pkix) to consider the following document:
  - 'X.509 Internet Public Key Infrastructure Online Certificate Status
 Protocol - OCSP'
draft-ietf-pkix-rfc2560bis-15.txt as Proposed Standard
  
  The IESG plans to make a decision in the next few weeks, and solicits
  final comments on this action. Please send substantive comments to
the
  ietf@ietf.org mailing lists by 2013-03-27.
 
 I'm having an issue with a subtle, backwards-incompatible change
 of the semantics of the exception case with the error code
 unauthorized, which tries to rewrite history 13 years after the fact
 without actually fitting the OCSP spec.
 
 It's about the second change from the introduction:
 
  o  Section 2.3 extends the use of the unauthorized error
  response, as specified in [RFC5019].
 
 While it is true that the error code abuse originally first appeared
 in rfc5019, the change was never declared as an update to rfc2560,
 nor filed as an errata to rfc2560.
 
 The original Exception cases in rfc2560 define the following semantics:
 
   http://tools.ietf.org/html/rfc2560#section-2.3
 
2.3 Exception Cases
 
In case of errors, the OCSP Responder may return an error message.
These messages are not signed. Errors can be of the following types:
 
-- malformedRequest
-- internalError
-- tryLater
-- sigRequired
-- unauthorized
 
  [...]
 
The response sigRequired is returned in cases where the server
requires the client sign the request in order to construct a
response.
 
The response unauthorized is returned in cases where the client is
not authorized to make this query to this server.
 
 
 The proposed extended semantics from the rfc2560bis draft:
 
   http://tools.ietf.org/html/draft-ietf-pkix-rfc2560bis-15#page-9
 
The response unauthorized is returned in cases where the client is
not authorized to make this query to this server or the server is
not
capable of responding authoritatively (cf. [RFC5019], Section
2.2.3).
 
 The rfc5019 semantics The server can not provide an authoritative
 response
 to this specific request is incompatible with the semantics you are
not
 authorized to submit OCSP requests to this service.
 
 There is another serious conflict with the rfc5019 repurposed error
code
 semantics and rfc2560.  While rfc5019 is limited to a single status
 request,
 rfc2560 and rfc2560bis both allow a list of several Requests to
 be sent in a single OCSPRequest PDU.  An OCSP response, however, is
 not allowed to contain responseBytes when an error code is returned
 in the response status:
 
   
http://tools.ietf.org/html/draft-ietf-pkix-rfc2560bis-15#section-4.2.1
 
4.2.1 ASN.1 Specification of the OCSP Response
 
An OCSP response at a minimum consists of a responseStatus

Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-03-26 Thread Stefan Santesson
On 3/26/13 12:13 PM, Martin Rex m...@sap.com wrote:

Adding 3 more OCSPResponseStatus error codes { no_authoritative_data(7),
single_requests_only(8), unsupported_extension(8) } with well-defined and
conflict-free semantics to the existing enum would be perfectly backwards
compatible.

Of course it is backwards compatible with the standard, but not with the
installed base.

What would happen to the installed base of clients if OCSP responders
would change from current unauthorized to one of your new error codes?
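The installed-base concern can be sketched as follows. This is a
hypothetical illustration: the numeric values 0-6 are the
OCSPResponseStatus codes from RFC 2560 (value 4 is unused in the ASN.1
module), while 7 stands in for one of the proposed new codes; how any
given deployed legacy client actually reacts is exactly the open question.

```python
# RFC 2560 OCSPResponseStatus values known to a legacy client.
KNOWN_STATUSES = {
    0: "successful",
    1: "malformedRequest",
    2: "internalError",
    3: "tryLater",
    # 4 is not used
    5: "sigRequired",
    6: "unauthorized",
}

def legacy_client_interpret(status_code):
    # A strict legacy client has no rule telling it to treat a proposed
    # no_authoritative_data(7) like unauthorized(6); an unrecognized code
    # is simply a hard failure of unspecified kind.
    return KNOWN_STATUSES.get(status_code, "error: unrecognized status")
```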

/Stefan




Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-03-26 Thread Stefan Santesson
What OCSP client are you using that behaves like this?

On 3/26/13 1:09 PM, Martin Rex m...@sap.com wrote:

I would no longer get a popup from my OCSP client that tells me
that I'm unauthorized to submit OCSPRequests to that server, and that
the server has been moved to a blacklist




Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-03-26 Thread Stefan Santesson
I take it that the answer to my question is none.

Which is what I suspected. The semantics of unauthorized does not give
you the basis for such functionality.
And 5019 is very widely deployed.

I'm going to step down from this discussion and see how it goes.
It is not me you have to convince. It's the community of implementers.

/Stefan

On 3/26/13 1:39 PM, Martin Rex m...@sap.com wrote:

Stefan Santesson wrote:
 What OCSP client are you using that behaves like this?
 
 On 3/26/13 1:09 PM, Martin Rex m...@sap.com wrote:
 
 I would no longer get a popup from my OCSP client that tells me
 that I'm unauthorized to submit OCSPRequests to that server, and that
 the server has been moved to a blacklist

Every sensible implementation of rfc2560 that does not happen to
be based on rfc5019.

I knew about rfc2560 for several years, but I only learned about the
existence of rfc5019 a few weeks ago -- because of the bogus change
to the unauthorized semantics in the rfc2560bis I-D.


-Martin




Re: Gen-ART review of draft-ietf-pkix-rfc2560bis-15

2013-03-26 Thread Stefan Santesson
Hi David,

Thanks for the review.
My reply in line.

On 3/26/13 1:25 AM, Black, David david.bl...@emc.com wrote:

Authors,

I am the assigned Gen-ART reviewer for this draft. For background on
Gen-ART, please
see the FAQ at http://wiki.tools.ietf.org/area/gen/trac/wiki/GenArtfaq.

Please resolve these comments along with any other Last Call comments you
may receive.

Document: draft-ietf-pkix-rfc2560bis-15
Reviewer: David L. Black
Review Date: March 25, 2013
IETF LC End Date: March 27, 2013

Summary:
This draft is on the right track but has open issues, described in the
review.

This draft updates the OCSP protocol for obtaining certificate status
with some minor extensions.

Because this is a bis draft, I reviewed the diffs against RFC 2560.

I did not check the ASN.1.  I also did not see a writeup for this draft
in the data tracker, and so will rely on the document shepherd to
ensure that the ASN.1 has been checked when the writeup is prepared.

I found five open issues, all of which are minor, plus one idnits item
that is probably ok, but should be double-checked.

Minor issues:

[1] Section 2.2:

   NOTE: The revoked state for known non-issued certificate serial   
   numbers is allowed in order to reduce the risk of relying   
   parties using CRLs as a fall back mechanism, which would be 
   considerably higher if an unknown response was returned.

Given this explanation, I'm surprised that the use of revoked instead of
unknown for a known non-issued certificate is a MAY requirement and
not a SHOULD requirement.  Why is that the case?

It appears that the reason is that the use of revoked in this situation
may be dangerous when serial numbers can be predicted for certificates
that
will be issued in the future.  If that's what's going on, this concern is
already explained in the security considerations section, but it should
also be mentioned here for completeness.

No, this is not the main reason. The main reason is the one stated as a
Note: in this section:

NOTE: The revoked state for known non-issued certificate serial numbers
is allowed in order to reduce the risk of relying parties using CRLs as a
fall back mechanism, which would be considerably higher if an unknown
response was returned.



[2] Section 4.2.2.2:

   The key that signs a certificate's status information need not be the
   same key that signed the certificate. It is necessary however to
   ensure that the entity signing this information is authorized to do
   so.  Therefore, a certificate's issuer MAY either sign the OCSP
   responses itself or it MAY explicitly designate this authority to
   another entity.

The two instances of MAY in the above text were both MUST in RFC 2560.

The RFC 2560 text construction of MUST or MUST is a bit odd, but the
two
MAYs in this draft are even worse, as they allow MAY do something else
entirely, despite being enclosed in an either-or construct.  I strongly
suspect that the latter was not intended, so the following would be
clearer:

   The key that signs a certificate's status information need not be the
   same key that signed the certificate. It is necessary however to
   ensure that the entity signing this information is authorized to do
   so.  Therefore, a certificate's issuer MUST do one of the following:
   - sign the OCSP responses itself, or
   - explicitly designate this authority to another entity.


I Agree. I will adopt your text.


[3] Section 4.3:

Is the SHOULD requirement still appropriate for the DSA with SHA-1 combo
(vs. a MAY requirement)?  This requirement was a MUST in RFC 2560, but
I wonder about actual usage of DSA in practice.

The change in algorithm requirements was provided by RFC 6277, and further
refined in this draft in accordance with requests from Sean Turner.


[4] Section 5, last paragraph:

   Responding a revoked state to certificate that has never been 
   issued may enable someone to obtain a revocation response for a 
   certificate that is not yet issued, but soon will be issued, if the 
   CA issues certificates using sequential certificate serial number   
   assignment.

The above text after starting with the if is too narrow - it should say:

   if the certificate serial number of the certificate that
   will be issued can be predicted or guessed by the requester.
   Such prediction is easy for a CA that issues certificates
   using sequential certificate serial number assignment.

There's also a nit in original text - its first line should be:

   Responding with a revoked state for a certificate that has never been 

Good suggestions. I will update accordingly.


[5] Section 5.1.1:

   In archival applications it is quite possible that an OCSP responder
   might be asked to report the validity of a certificate on a date in 
   the distant past. Such a certificate 

Re: [pkix] Last Call: draft-ietf-pkix-rfc2560bis-15.txt (X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP) to Proposed Standard

2013-03-25 Thread Stefan Santesson
Martin,

Whether we like it or not. This is the legacy.
There is no way for a client to know whether the OCSP responder implements
RFC 2560 only or in combination with RFC 5019.

So therefore, the update that was introduced in 5019 must be expected by
all clients from all responders. Therefore it is included in 2560bis.

What in your mind, would be a better solution?

/Stefan

On 3/23/13 7:52 AM, Martin Rex m...@sap.com wrote:

The IESG wrote:
 
 The IESG has received a request from the Public-Key Infrastructure
 (X.509) WG (pkix) to consider the following document:
 - 'X.509 Internet Public Key Infrastructure Online Certificate Status
Protocol - OCSP'
   draft-ietf-pkix-rfc2560bis-15.txt as Proposed Standard
 
 The IESG plans to make a decision in the next few weeks, and solicits
 final comments on this action. Please send substantive comments to the
 ietf@ietf.org mailing lists by 2013-03-27.

I'm having an issue with a subtle, backwards-incompatible change
of the semantics of the exception case with the error code
unauthorized, which tries to rewrite history 13 years after the fact
without actually fitting the OCSP spec.

It's about the second change from the introduction:

 o  Section 2.3 extends the use of the unauthorized error
 response, as specified in [RFC5019].

While it is true that the error code abuse originally first appeared
in rfc5019, the change was never declared as an update to rfc2560,
nor filed as an errata to rfc2560.

The original Exception cases in rfc2560 define the following semantics:

  http://tools.ietf.org/html/rfc2560#section-2.3

   2.3 Exception Cases

   In case of errors, the OCSP Responder may return an error message.
   These messages are not signed. Errors can be of the following types:

   -- malformedRequest
   -- internalError
   -- tryLater
   -- sigRequired
   -- unauthorized

 [...]

   The response sigRequired is returned in cases where the server
   requires the client sign the request in order to construct a
   response.

   The response unauthorized is returned in cases where the client is
   not authorized to make this query to this server.


The proposed extended semantics from the rfc2560bis draft:

  http://tools.ietf.org/html/draft-ietf-pkix-rfc2560bis-15#page-9

   The response unauthorized is returned in cases where the client is
   not authorized to make this query to this server or the server is not
   capable of responding authoritatively (cf. [RFC5019], Section 2.2.3).

The rfc5019 semantics The server can not provide an authoritative
response
to this specific request is incompatible with the semantics you are not
authorized to submit OCSP requests to this service.

There is another serious conflict with the rfc5019 repurposed error code
semantics and rfc2560.  While rfc5019 is limited to a single status
request,
rfc2560 and rfc2560bis both allow a list of several Requests to
be sent in a single OCSPRequest PDU.  An OCSP response, however, is
not allowed to contain responseBytes when an error code is returned
in the response status:

  http://tools.ietf.org/html/draft-ietf-pkix-rfc2560bis-15#section-4.2.1

   4.2.1 ASN.1 Specification of the OCSP Response

   An OCSP response at a minimum consists of a responseStatus field
   indicating the processing status of the prior request. If the value
   of responseStatus is one of the error conditions, responseBytes are
   not set.

   OCSPResponse ::= SEQUENCE {
  responseStatus OCSPResponseStatus,
  responseBytes  [0] EXPLICIT ResponseBytes OPTIONAL }


So it is impossible to convey the OCSP responder is not capable of
responding authoritatively for a subset of Requests in the requestList
while returning regular status for the remaining Requests in the list by
using a repurposed unauthorized error code.
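The structural limitation can be sketched as follows; the field names are a
hypothetical Python rendering of the ASN.1 above, not a real DER codec:

```python
# Sketch: when responseStatus is an error, responseBytes is absent
# (per the ASN.1 above), so an error can only describe the whole
# OCSPRequest, never an individual Request in a multi-request requestList.

def build_ocsp_response(status, single_responses=None):
    if status != "successful":
        # Error responses carry no responseBytes at all.
        return {"responseStatus": status}
    return {"responseStatus": status,
            "responseBytes": {"responses": single_responses or []}}
```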

The current draft neither mentions this contradiction, nor does it
provide any guidance on how an implementation should behave in this
situation.
 

I would appreciate if this problem of draft-*-rfc2560bis could be fixed
prior to making it a successor for rfc2560.


-Martin




Re: Problems using the default ftp settings in NroffEdit for diff display

2011-05-25 Thread Stefan Santesson
Hi Martin,

Yes, you are right indeed (as always).

The original thought with the feature was to allow me to create a
permanent diff that I could let others view just by posting its URL query.
I then extended that solution to a temporary post to a free web host with
FTP access.

I have plans to do what you suggest, but I don't have time to do it atm.
It will be implemented before the summer is over I think.


/Stefan


On 11-05-25 2:31 AM, Martin Rex m...@sap.com wrote:

Stefan Santesson wrote:
 
 It has come to my attention that there is a problem using the default
ftp
 settings for displaying the diff between the current edited draft and
the
 latest published draft using NroffEdit.

If this is just for the purpose of feeding the rfcdiff-script
from tools.ietf.org, then it would make more sense to allow
POSTing a file to that script and supplying a second source as URL,
completely obviating the need for a temporary storage on the internet
that is accessible to the rfcdiff script on tools.ietf.org.

-Martin


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Problems using the default ftp settings in NroffEdit for diff display

2011-05-24 Thread Stefan Santesson
It has come to my attention that there is a problem using the default ftp
settings for displaying the diff between the current edited draft and the
latest published draft using NroffEdit.

The reason is that I signed up for a free ftp service without understanding
that it had a per-month max traffic limit.
This feature has proven to be more popular than I anticipated. I have tried
to upgrade the account, but so far with no success.

This problem is easiest avoided if you just configure your own ftp service
to a site that also offers http access to the stored files and then
selecting your own ftp settings to be used by default.

I will figure out a fix to this problem as soon as I have time, but right
now I'm swamped with other stuff.
Sorry for the inconvenience.

BTW, I just released 2.08 with a fix that allows you to either choose the
standard RFC citation in the automated reference builder, or to select that
all obsolete references are marked as obsolete. This is done by menu option.
This change was done on request from users. In 2.07 all obsolete references
were identified as obsolete and you could not deselect that feature.

/Stefan

http://aaa-sec.com/nroffedit/index.html




Now safe to download NroffEdit 2.05

2011-03-31 Thread Stefan Santesson
My apology for all the recent updates that hit the users of NroffEdit.

I've added quite a few features in the recent attempt to push NroffEdit to
the next level, not only as an easy-to-use editing tool, but also as a
review tool with improved text2Nroff conversion, improved reading functions,
diff integration and the ability to publish permanent diff URLs to others to
show suggested changes.

The full list of recent changes can be found here:
http://aaa-sec.com/nroffedit/nroffedit/faqbugs.html

All changes from version 1.40 up to 2.05 have been added during the last
couple of weeks.

If you got tired and turned off the auto update feature due to the many
updates, then it is now safe to download version 2.05.
It appears to be solid and stable and is probably the last version for a
while, as other things need my attention.

Any questions or suggestions can be sent to  tools-disc...@ietf.org or to
me privately.
Thanks for all feedback that helped this version get better.

/Stefan Santesson





Re: Automatically updated Table of Contents with Nroff

2011-03-25 Thread Stefan Santesson
Great thoughts from many people.

I just want to clarify a few things as I see that my message is slightly
misunderstood.

Firstly:
The core of my opinion is NOT that I think people should convert to nroff
encoding or XML coding or XHTML encoding or whatever encoding as editing
language.
I don't think that authors in the future should have to deal with any
markup language at all. That is a task for the editing application, not
the drafter.

The only way that can happen is if we can provide the missing link between
draft editing and markup. The current formats as they are specified and
used today are both crippled.

Nroff can capture exact formatting but can't capture more advanced content
building metadata very well. I had to add a directive layer (only
understood by NroffEdit) inside Nroff comments to make that happen. The
output is still compatible with any nroff compiler, but such a compiler will
not understand how to use the NroffEdit directives to update e.g. the table
of contents as the document evolves (the actual subject of this thread).

XML as it is used can capture content building metadata but not format.

That means that I can't build an editing tool that can capture the editing
process in standardized markup.
As long as this is the case, no editing tools free of markup hacking can
be developed that can interoperate with other editing tools on a full
scale.

Secondly:
The reason why I personally use Nroff as edit format is:

1) That how I started off and I have seen no compelling reason yet to
switch.
2) I find it personally the least evil format for the text writing process.
3) I managed to overcome many of the backsides with nroff (Table of
content building, reference generation etc) in my NroffEdit tool.
4) I like the WYSIWYG experience in my tool. Always being able to
immediately see the result of my editing makes me a better writer.

I have no reason to try to convince anyone to use anything but what works
best for them.

The most I can wish for by making NroffEdit available is to put up a
viable, darn-easy-to-use alternative to xml editing, to inspire further
development of better tools.
Kudos to Julian and others involved with xml2rfc for your efforts!


/Stefan





On 11-03-25 7:21 PM, John C Klensin john-i...@jck.com wrote:



--On Friday, March 25, 2011 13:06 -0400 Andrew G. Malis
agma...@gmail.com wrote:

 I know that XML is the wave of the future, but I just want to
 give Stefan a plug as a happy user that NroffEdit makes the
 mechanical and formatting part of writing drafts almost
 effortless.

And, had it appeared a decade ago, I might be using it too.  As
it is, it would require my learning something new when I have a
pair of adequate (for me) solutions... and I'm at least as
vunerable to what I know is better as anyone else.  Wave of
the future doesn't interest me nearly as much as being sure
that whatever tools people find convenient and are willing to
support as needed remain usable and, in particular,  that we
don't find ourselves requiring the use of one particular tool...
no matter what it is.

 john





Re: Automatically updated Table of Contents with Nroff

2011-03-24 Thread Stefan Santesson
Ned,

On 11-03-24 9:48 PM, Ned Freed ned.fr...@mrochek.com wrote:

 I can't escape the feeling that this discussion of using markup language
 editing to produce RFCs, is a bit upside down.

 I'm much more concerned with draft writers having to deal with markup
 syntax than I am about drafters trying to put a page break in a sensible
 location, or format their text in a readable fashion.

 The latter is not a problem that consumes a lot of energy, neither do I
 believe that drafters concern with readability is a matter that causes
the
 RFC production center a lot of headache. So why is this a matter of
 concern?

 I honestly think people waste a lot more time trying to figure out how
to
 properly form correct markup syntax, than they do with format tweaking.

My experience has been the exact opposite. Markup syntax is a known
quantity
that is easily accommodated, especially if you use a markup-aware editor.
The
editor I use closes elements automatically, provides constant syntax
checks,
and lets me toggle sections of the document in and out of view.

It's been a very long time since I've given any real thought to the
supposed
difficulties of dealing with markup syntax.

But you are probably pretty experienced user and you probably spent some
time setting up your environment to get where you are.

I believe having to deal with markup syntax poses a significant barrier to
those not as experienced as you.


But page breaks... I have on more occasions than I care to recall spent a
swacking big chunk of time adjusting them. Fix one widow, an orphan
appears
somewhere else. And yes, I realize this is not really necessary for I-Ds,
but
when the breaks are really bad I just can't help but try and fix them.

It's been a very long time since I experienced any problem with
formatting. :)
That was in the old days when I used a separate Nroff compiler. Using
NroffEdit's side by side view of source and text has completely removed
that issue for me.
And I think that is true also for an inexperienced user.


 In my ideal world, where XML would work at its best, drafters would
 concentrate on writing text in a fashion that could be captured into XML
 (or any functional markup language), making XML the output of the
editing
 process rather than the input.

Brian Reid once came up with a nice term for what results when this goal
is
pursued to its logical conclusion: What You Get is What You Deserve.

Great one. And so true.


 That way it would not hurt the drafters if the XML syntax was extended
to
 capture both content and format, making it a complete input to the
 rendering process.

 Given the rather primitive structure of RFCs, writing such editor seem
not
 to be such a grim task. I'm even tempted to provide one in the next
major
 version of NroffEdit, where you could choose nroff and/or XML as markup,
 but never bother with it when writing your draft.

The task may not be grim, but the end results of such exercises - and
there
have been a lot of them - usually are.


I believe you are right, looking in the mirror. But times change. I think
this is an area where open source development and open source libraries
have really provided a revolution. If we start by creating the
specifications that would allow such a tool to be created, then you don't
need a huge software organization and kazillions of dollars anymore to
piece together something that could actually be really useful.

I know I'm an idealist. I still believe in simplicity.

/Stefan 




___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Automatically updated Table of Contents with Nroff

2011-03-23 Thread Stefan Santesson
I can't escape the feeling that this discussion of using markup language
editing to produce RFCs, is a bit upside down.

I'm much more concerned with draft writers having to deal with markup
syntax than I am about drafters trying to put a page break in a sensible
location, or format their text in a readable fashion.

The latter is not a problem that consumes a lot of energy, nor do I
believe that drafters' concern with readability causes the
RFC Production Center a lot of headache. So why is this a matter of
concern?

I honestly think people waste a lot more time trying to figure out how to
properly form correct markup syntax, than they do with format tweaking.

In my ideal world, where XML would work at its best, drafters would
concentrate on writing text in a fashion that could be captured into XML
(or any functional markup language), making XML the output of the editing
process rather than the input.

That way it would not hurt the drafters if the XML syntax was extended to
capture both content and format, making it a complete input to the
rendering process.

Given the rather primitive structure of RFCs, writing such an editor seems not
to be such a grim task. I'm even tempted to provide one in the next major
version of NroffEdit, where you could choose nroff and/or XML as markup,
but never bother with it when writing your draft.

If the XML syntax was expanded to capture formatting then that could
indeed render nroff obsolete. Right now I would have to keep relying on
nroff to ensure that the document is rendered as written.

/Stefan




On 11-03-21 12:28 PM, John C Klensin john-i...@jck.com wrote:



--On Thursday, March 17, 2011 12:36 -0400 Tony Hansen
t...@att.com wrote:

 If we're going to put more work into xml2rfc, I would much
 rather figure out what the production people are doing with
 nroff that xml2rfc doesn't currenty do, and add twiddeles so
 they can do that in xml2rfc and skip the nroff completely.
 
 Yup, this exactly matches conversations I and others have been
 having with the RFC production center.
 
 Conversations along these lines have also been a part of why
 there's the xml2rfc SoW currently in progress: to generate a
 better code base from which modifications to xml2rfc can be
 more easily made.

Tony,

While I believe this is a fine objective, I want to point out
one issue: the big advantage of generic markup (XML or
otherwise) over finely-controlled formatting markup (nroff or
otherwise) is that the former eliminates the need for authors
(and others early in the publication process) to worry about
formatting and, indeed, keeps them away from it.  The more one
goes down the path of letting (or, worse, encouraging or
requiring) authors fine-tune formatting and layout, the more we
reduce the advantages of generic markup.  In the extreme case,
xml2rfc could deteriorate into what might be described as nroff
plus a bunch of macros in an XML-like syntax.

I don't think we are there or that we are at immediate risk of
going there.  But I think we need to exercise caution.

In particular, if the idea is for the RFC Production Center to
be able to do detailed formatting (like page boundary tweaking)
using the general xml2rfc syntax and tools, I suggest that:

First, people think about whether there is a way to express the
requirements generically.   For example, a lot of the page
boundary tweaking that the Production Center has to do is
because the xml2rfc processing engine isn't good enough at
handling widow and orphan text.   If changes were made to the
engine to, e.g., bind section titles more closely to section
body text, and generally to permit the needed relationships to
be expressed better in generic markup, the requirement for
formatting-tweaking might be greatly reduced.

Second, if formatting control must be (further) introduced into
xml2rfc in order to make page layout control possible, can we do
it by inventing a processing directive family separate from
?rfc...? If we had ?rfc... as something I-D authors were
expected to use and a ?rfcformat... as something used only in
final formatting production, possibly even generating a comment
from nits checkers if present earlier, we would be, IMO, lots
better off --and lots closer to common publications industry
practice-- than mixing them together.

john




Change notice of the handling of escape characters in NroffEdit

2011-03-23 Thread Stefan Santesson
I recently discovered a bug in NroffEdit's handling of the escape character
backslash \.

If you are using NroffEdit today and write drafts where backslash is used in
your draft text, then the output of your draft may change when upgrading to
NroffEdit 2.02 (released yesterday).

There is however a very easy way to find out if this is an issue for your
drafts.

NroffEdit now integrates with the IETF diff tool. In a single command,
NroffEdit will get you an IETF tools diff between your current state of
editing against the latest published version of your draft. It is advisable
to run this check to see if the fixed escape character handling has caused
any changes to your text output.

If you run into problems, then you will find good examples of escape
character handling in the new updated template.
Please let me know if you run into any problems.

/Stefan 




Re: Automatically updated Table of Contents with Nroff

2011-03-17 Thread Stefan Santesson
Julian,

I'm not sure what you have in mind that would change the page breaks.

NroffEdit accomplishes this by iterating the task a number of times.

The following steps are executed:

1) Analyze the Nroff document to determine which headings are present
and their data.
2) Compile the text document.
3) Analyze the text document to figure out on which pages the headings
appear.
4) Paste the preliminary ToC into the Nroff document.
5) Compile the text to allow page breaks to adjust due to the added ToC.
6) Analyze the text to figure out where the headings finally ended up.
7) Update the ToC with final data.


These steps happen each time you press update (F3) or when you save the
document. The ToC is always correct in the compiled txt document.
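
A toy sketch, in Python, of the iterate-until-stable idea behind these steps
(NroffEdit itself is Java; LINES_PER_PAGE, the heading model, and the ToC
line format are all simplifications invented for this illustration):

```python
LINES_PER_PAGE = 48  # toy page length; real nroff pagination is richer

def paginate(lines):
    """Map each line index to a 1-based page number."""
    return [i // LINES_PER_PAGE + 1 for i in range(len(lines))]

def build_toc(body, headings):
    """headings: list of (title, body_line_index) pairs.

    Prepend a ToC whose page numbers are stable under the pagination
    that the ToC itself causes -- the fixpoint the seven steps above
    iterate toward (compile, locate headings, patch ToC, repeat).
    """
    toc_pages = {title: 0 for title, _ in headings}
    while True:
        toc = [f"{title} .... {toc_pages[title]}" for title, _ in headings]
        doc = toc + body                       # "compile" the document
        pages = paginate(doc)
        new_pages = {title: pages[len(toc) + idx] for title, idx in headings}
        if new_pages == toc_pages:             # ToC matches compiled output
            return doc
        toc_pages = new_pages                  # patch the ToC and recompile
```

In this simplified model two passes usually suffice, since adding the ToC
only shifts the body down by a fixed number of lines.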

/Stefan





On 11-03-17 3:12 PM, Julian Reschke julian.resc...@gmx.de wrote:

On 17.03.2011 14:59, Martin Rex wrote:
 Julian Reschke wrote:

 On 17.03.2011 01:07, Stefan Santesson wrote:
 ...
 This is not correct.

 The automatic ToC function (and now since version 1.40 also the
automated
 reference function) operates using commands hidden behind Nroff
comments.
 A standard NROFF compiler will ignore the comments and process the
ToC as
 if it was manually entered. The ToC will match the document even if
 compiled by a standard NROFF compiler.
 ...

 Yes, but only as long page breaks aren't changed, right? That is, the
 ToC contains fixed page numbers, no?

 That is a non-issue.  NRoffEdit produces the same output as the
 nroff used by the RFC-Editor.

 If you want to change the text, you would use NRroffEdit to do it
 and update the TOC accordingly by a simple keypress or menu selection.

Martin,

the context of this was a discussion how to generate a ToC using NROFF.

My comment was regarding the claim that NRoffEdit somehow achieves this;
it does not. It just does exactly what xml2rfc does: it paginates itself
and adjusts the ToC accordingly. Once you feed the nroff output, be it
from NRoffEdit or xml2rfc, into a standard nroff process, it'll get
incorrect as soon as page breaks change.

This is relevant as changing vertical whitespace is one of the remaining
reasons why the RFC Production Center uses nroff today (besides archival).

Best regards, Julian


Re: Automatically updated Table of Contents with Nroff

2011-03-17 Thread Stefan Santesson
OK, I understand what you say now.

All they have to do is to run it through NroffEdit once more after they
are done with their nroff editing.
They don't use NroffEdit as their main tool for nroff editing, but they do
have it and use it (at least last time I talked to them).

But agreed, NroffEdit can not maintain the ToC while the document is being
changed in another editor. That would be a tough one :)

/Stefan


On 11-03-17 4:42 PM, Julian Reschke julian.resc...@gmx.de wrote:

On 17.03.2011 16:36, Martin Rex wrote:
 Julian Reschke wrote:

 the context of this was a discussion how to generate a ToC using NROFF.

 My comment was regarding the claim that NRoffEdit somehow achieves
this;
 it does not. It just does exactly what xml2rfc does: it paginates
itself
 and adjusts the ToC accordingly. Once you feed the nroff output, be it
 from NRoffEdit or xml2rfc, into a standard nroff process, it'll get
 incorrect as soon as page breaks change.

 The page break will NOT change.  NRoffEdit produces the exact same
 page breaks as the standard nroff process.

Yes.

But it can only maintain the ToC as long as it is used.

The RFC Production Center, according to the xml2rfc SoW, does *not* use
NRoffEdit, but a standard nroff installation.

Once you export from NRoffEdit, and start changing page breaks, the ToC
will become incorrect.

If you stay *inside* the tools (be it NRoffEdit or xml2rfc), there is no
such problem.

Best regards, Julian


Re: Automatically updated Table of Contents with Nroff

2011-03-17 Thread Stefan Santesson
It's up to them, but it could easily be done if they want to.

It could easily be done even if there is no nroff, since NroffEdit can
generate nroff from text and then generate the ToC.


/Stefan


On 11-03-17 5:03 PM, Julian Reschke julian.resc...@gmx.de wrote:

On 17.03.2011 16:55, Stefan Santesson wrote:
 OK, I understand what you say now.

 All they have to do is to run it through NroffEdit once more after they
 are done with their nroff editing.
 They don't use NroffEdit as their main tool for nroff editing, but they
do
 have it and use it (at least last time I talked to them).

 But agreed, NroffEdit can not maintain the ToC while the document is
being
 changed in another editor. That would be a tough one :)

 /Stefan

If this (running NroffEdit as a postprocessing step) could be
established as standard procedure, this would simplify the output target
for the xml2rfc SoW.

Best regards, Julian




Re: New version of NroffEdit released for IETF80

2011-03-17 Thread Stefan Santesson
While the heat is on...

I decided to clear my list of requested features and just pushed out
version 1.51 of NroffEdit out the door.
This includes selectable font styles and sizes, styling of the text
output, and an automatic warning if you produce lines longer than 72
characters, etc. The page viewing function has also been improved and
utilizes the selected font size.

Please don't hesitate to let me know if you miss anything (or like
anything). I try to maintain a list for long dark winter nights...

/Stefan

http://aaa-sec.com/nroffedit/nroffedit/download.html



On 11-03-14 7:57 AM, Stefan Santesson ste...@aaa-sec.com wrote:

I have made some significant improvements to the NroffEdit IETF draft
editor.

Most notably, NroffEdit will now build your list of references
automatically based on the IETF citation library.
A new look-for-updates function will automatically download the latest
version of the library as well as update template files and check if there
is a more recent version of NroffEdit available.

Thus I will no longer have to advertise updates.

The new features are exemplified in the new template file for new drafts.

More information and files for download are available from
http://aaa-sec.com/nroffedit/index.html

I will hold a tutorial session with Alice Hagens on Sunday March 27, 15.00
where NroffEdit and xml2rfc will be presented and demonstrated.

/Stefan




Re: Automatically updated Table of Contents with Nroff

2011-03-16 Thread Stefan Santesson
Julian,


Sorry for an awfully late response, but just spotted this and thought I
should clarify as author of the NroffEdit tool.




 NRoffEdit is an all-in-one wysiwyg tool in Java that maintains
 the TOC for you (within the .nroff source itself).

Which will only work properly as long as the output isn't run through
standard NROFF, and page breaks change. Just saying.

This is not correct.

The automatic ToC function (and now since version 1.40 also the automated
reference function) operates using commands hidden behind Nroff comments.
A standard NROFF compiler will ignore the comments and process the ToC as
if it was manually entered. The ToC will match the document even if
compiled by a standard NROFF compiler.

/Stefan






New version of NroffEdit released for IETF80

2011-03-14 Thread Stefan Santesson
I have made some significant improvements to the NroffEdit IETF draft
editor.

Most notably, NroffEdit will now build your list of references
automatically based on the IETF citation library.
A new look-for-updates function will automatically download the latest
version of the library as well as update template files and check if there
is a more recent version of NroffEdit available.

Thus I will no longer have to advertise updates.

The new features are exemplified in the new template file for new drafts.

More information and files for download are available from
http://aaa-sec.com/nroffedit/index.html

I will hold a tutorial session with Alice Hagens on Sunday March 27, 15.00
where NroffEdit and xml2rfc will be presented and demonstrated.

/Stefan




Re: Review of draft-saintandre-tls-server-id-check

2010-09-14 Thread Stefan Santesson
Peter,

I'm not driven by any desire to mandate more complex name comparison than is
called for. I just tend to think that it is better to leave name comparison
requirements up to local policy.

However I think your argumentation is reasonable here and you convinced me.
I agree that in the general cases you are not interested in the host DNS
name.

I've decided that I'm perfectly fine with your proposed wording.

/Stefan

On 10-09-14 5:32 AM, Peter Saint-Andre stpe...@stpeter.im wrote:

 On 9/13/10 6:03 PM, Stefan Santesson wrote:
 Peter,
 Comments in line;
 
 Ditto.
 
 On 10-09-13 9:16 PM, Peter Saint-Andre stpe...@stpeter.im wrote:
 
 On 9/13/10 12:39 PM, Stefan Santesson wrote:
 Peter,
 
 On 10-09-13 6:08 PM, Peter Saint-Andre stpe...@stpeter.im wrote:
 
 Hi Shumon,
 
 As I see it, this I-D is attempting to capture best current practices
 regarding the issuance and checking of certificates containing
 application server identities. Do we have evidence that any existing
 certification authorities issue certificates containing both an SRVname
 for the source domain (e.g., example.com) and dNSName for the target
 domain (e.g., apphosting.example.net)? Do we have evidence that any
 existing application clients perform such checks? If not, I would
 consider such complications to be out of scope for this I-D.
 
 That said, we need to be aware that if such usage arises in the future,
 someone might write a document that updates or obsoletes this I-D; in
 fact the present authors very much expect that such documents will
 emerge after the Internet community (specifically certification
 authorities, application service providers, and application client
 developers) have gained more experience with PKIX certificates in the
 context of various application technologies.
 
 Peter
 
 I would like to turn the question around and ask why this specification
 need
 to have an opinion on whether a relying party feels he have to check both
 host name and service?
 
 Stop right there. :) I sense a possible source of confusion. What do you
 mean by host name and what do you mean by service?
 
 Sorry for sloppy use of words.
 
 With host name I mean here the actual DNS name of the host, which might be
 host1.example.com (dNSName)
 
 Or which might be apphost37.example.net or whatever...
 
 By service I mean the service under a given domain, which for the same host
 might be _xmpp.example.com (SRVName)
 
 OK.
 
 But what does dNSName=host1.example.com + SRVName=_xmpp.example.com
 actually mean? I see three possibilities:
 
 1. Connect to example.com's XMPP service only if it's hosted on
 host1.example.com
 
 2. Don't connect to host1.example.com if you are looking for something
 other than example.com's XMPP service (i.e., if you want the IMAP
 service or anything else at example.com, terminate the connection)
 
 3. The connection is fine if you're looking for (a) any service at
 host1.example.com, *or* (b) example.com's XMPP service
 
 In my experience running the XMPP ICA for a few years, I found that our
 root CA would issue dNSName=example.com + SRVName=_xmpp.example.com but
 not dNSName=host1.example.com + SRVName=_xmpp.example.com (i.e., the
 name in both dNSName and SRVName was the same). Yes, that is anecdotal,
 but I think it makes some sense, because IMHO a CA is not going to get
 involved with the particular machine that hosts a service (i.e., to
 differentiate between the DNS domain name of the application service and
 the DNS domain name of the hosting machine / provider).
 
 Under the current rules, using this example I read it that the following
 apply:
 
 - If you are just checking the SRVName you will not learn the legitimate
 host DNS name. 
 
 Are there any application clients today that would check only the SRVName?
 
 So a certificate issued to host2.example.com will be accepted
 even if you intended to contact host1.example.com (even if that information
 is in the cert).
 
 In what sense does the application client intend to contact
 host1.example.com? The user controlling a client (a browser or an email
 client or an IM client or whatever) intends to contact example.com, not
 host1.example.com or apphost37.example.net or whatever.
 
 - If you just check the dNSName, you will miss the fact that you talk to the
 designated ldap server and not the xmpp server (even if that information is
 in the cert).
 
 Correct. If that is a problem in your user community or operating
 environment, then you need to find a better client.
 
 IMHO it is almost a best practice for the DNS domain names in the
 various presented identifiers of the certificate (dNSName, SRVName,
 uniformResourceIdentifier, etc.) to be the same. I think we in the
 Internet community don't yet have enough experience to say that with
 high confidence, which is why I'm not ready to push on the point in this
 I-D. However, let us consider the example of a delegated domain, such as
 dNSName=apphost37.example.net + SRVName=_xmpp.example.com, which might

Re: Review of draft-saintandre-tls-server-id-check

2010-09-14 Thread Stefan Santesson
Peter,

After the past discussions, the remaining things on my review list are:

General:
consider substituting "PKIX-based systems" and "PKIX Certificates" with "PKI
systems based on RFC 5280" and "RFC 5280 Certificates", alternatively
include [PKIX] brackets to clarify that it references RFC 5280.
 
 
General: 
I would consider stating that server certificates according to this profile
either MUST or SHOULD have the serverAuth EKU set, since it is always
related to the use of TLS and server authentication. At least it MUST be set
when allowing checks of the CN-ID (see 2.3 below).
 
 
1.3.2 and section 3
SRVName is described as an extension in 1.3.2, but is in fact a subject
alternative name (otherName form) defined in RFC 4985.
This error is repeated in section 3, bullet 4, on page 16.
 
1.1. 
s/an application services/an application service
 
1.2
I find the following text hard to understand:
"(this document addresses only the DNS domain name of the application
service itself, not the entire trust chain)"
I'm not sure what the DNS domain name has to do with checking the entire
trust chain. Further, this document discusses more name forms than the DNS
domain name.
Perhaps this sentence was meant to say:
"(this document only addresses name forms in the leaf server certificate,
not any name forms in the chain of certificates used to validate the server
certificate)"
 
 
 
2.3
It would be good if we could restrict the use of CN-ID for storing a domain
name to the case when the serverAuth EKU is set. Requiring the EKU reduces
the probability that the CN-ID appears to be a domain name by accident or is
a domain name in the wrong context.
 
In many deployments, this also affects name constraints processing,
requiring domain name constraints to be applied to the CN attribute as well.
 
There should at least be a rule stating that any client that accepts the CN
attribute to carry the domain name MUST also perform name constraints on
this attribute using the domain name logic if name constraints are applied to
the path. Failing this requirement poses a security threat if the claimed
domain name in the CN-ID violates the name constraints set for domain names.
 
 
 
4.4.3 checking wildcard labels
The restriction to match against only one subdomain seems not to be
compatible with RFC 4592. RFC 4592 states:
 
   A wildcard domain name can have subdomains.  There is no need to
   inspect the subdomains to see if there is another asterisk label in
   any subdomain.
 
Further, I'm pretty sure that the rule of this draft is incompatible with
many deployments of wildcard matching. I recall that at Microsoft we allowed
any number of subdomains when I was involved with the CAPI team.
 
 
4.6.2 
States:
 
   In this case, the
   client MUST verify that the presented certificate matches the cached
   certificate and (if it is an interactive client) MUST notify the
   user if the certificate has changed since the last time a secure
   connection was successfully negotiated
 
How does the client know if the certificate has changed or whether it just
obtained an unauthorized certificate?
 
I guess in some cases it would work, but I feel sure there are exceptional
cases where the client just has a configured certificate but no knowledge of
what it obtained the last time it talked to the server.

 
/Stefan
 
 




Re: [certid] Review of draft-saintandre-tls-server-id-check

2010-09-13 Thread Stefan Santesson


On 10-09-13 7:03 PM, Shumon Huque shu...@isc.upenn.edu wrote:
 
 Authorized by whom? I *think* that here the DNS domain name is one that
 the certified subject has itself authorized (perhaps even established
 is better) to provide the desired service. Therefore I suggest an
 alternative wording:
 
  A DNS domain name which the certified subject has
   authorized to provide the identified service.
 
 Peter
 
 I don't think the term authorized makes the situation any
 clearer.
 
 Let's take a concrete example: an IMAP client attempting to
 connect to and use the IMAP service at example.com.
 
 It needs to lookup the _imap._tcp.example.com. DNS SRV record
 to figure out which servers and ports to connect to.
 
 And in the presented certificate, it needs to expect to find an
 SRVName identifier with _imap.example.com as its contents,
 where the _Service and Name components were the same ones it used
 in the SRV query.
 
 There is no need to figure out who authorized what.
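
The construction in the quoted IMAP example can be sketched as follows
(illustrative Python helper; the function name and return shape are
invented for this sketch):

```python
def srv_identifiers(service, proto, domain):
    """Build, from the same _Service and Name components, the DNS SRV
    query name and the SRVName (RFC 4985 otherName) the client would
    expect in the server certificate. Illustrative sketch only."""
    query = f"_{service}._{proto}.{domain}."   # e.g. _imap._tcp.example.com.
    srv_name = f"_{service}.{domain}"          # e.g. _imap.example.com
    return query, srv_name
```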

I agree here. Both with this and with former speakers stating that the assertion
is made by the CA and not the subject.

I'm struggling to find the easiest text to understand, but I think this says
at least the correct thing:

  A DNS domain name, representing a domain for which the certificate
   issuer has asserted that the certified subject is a legitimate
   provider of the identified service.

/Stefan




Re: Review of draft-saintandre-tls-server-id-check

2010-09-13 Thread Stefan Santesson
Peter,

On 10-09-13 6:08 PM, Peter Saint-Andre stpe...@stpeter.im wrote:
 
 Hi Shumon,
 
 As I see it, this I-D is attempting to capture best current practices
 regarding the issuance and checking of certificates containing
 application server identities. Do we have evidence that any existing
 certification authorities issue certificates containing both an SRVname
 for the source domain (e.g., example.com) and dNSName for the target
 domain (e.g., apphosting.example.net)? Do we have evidence that any
 existing application clients perform such checks? If not, I would
 consider such complications to be out of scope for this I-D.
 
 That said, we need to be aware that if such usage arises in the future,
 someone might write a document that updates or obsoletes this I-D; in
 fact the present authors very much expect that such documents will
 emerge after the Internet community (specifically certification
 authorities, application service providers, and application client
 developers) have gained more experience with PKIX certificates in the
 context of various application technologies.
 
 Peter

I would like to turn the question around and ask why this specification needs
to have an opinion on whether a relying party feels it has to check both
host name and service?

I'm not against describing the typical case, as long as this specification
does not imply that a relying party that has a reason to check two name
types is doing something wrong.

I have no extremely good examples of practical implementation here but
checking both host name and service seems like both extremely easy and good
practice.

/Stefan




Re: Review of draft-saintandre-tls-server-id-check

2010-09-13 Thread Stefan Santesson
Peter,
Comments in line;


On 10-09-13 9:16 PM, Peter Saint-Andre stpe...@stpeter.im wrote:

 On 9/13/10 12:39 PM, Stefan Santesson wrote:
 Peter,
 
 On 10-09-13 6:08 PM, Peter Saint-Andre stpe...@stpeter.im wrote:
 
 Hi Shumon,
 
 As I see it, this I-D is attempting to capture best current practices
 regarding the issuance and checking of certificates containing
 application server identities. Do we have evidence that any existing
 certification authorities issue certificates containing both an SRVname
 for the source domain (e.g., example.com) and dNSName for the target
 domain (e.g., apphosting.example.net)? Do we have evidence that any
 existing application clients perform such checks? If not, I would
 consider such complications to be out of scope for this I-D.
 
 That said, we need to be aware that if such usage arises in the future,
 someone might write a document that updates or obsoletes this I-D; in
 fact the present authors very much expect that such documents will
 emerge after the Internet community (specifically certification
 authorities, application service providers, and application client
 developers) have gained more experience with PKIX certificates in the
 context of various application technologies.
 
 Peter
 
 I would like to turn the question around and ask why this specification needs
 to have an opinion on whether a relying party feels it has to check both
 host name and service?
 
 Stop right there. :) I sense a possible source of confusion. What do you
 mean by host name and what do you mean by service?
 
Sorry for sloppy use of words.

With host name I mean here the actual DNS name of the host, which might be
host1.example.com (dNSName)

By service I mean the service under a given domain, which for the same host
might be _xmpp.example.com (SRVName)

Under the current rules, using this example I read it that the following
apply:

- If you are just checking the SRVName you will not learn the legitimate
host DNS name. So a certificate issued to host2.example.com will be accepted
even if you intended to contact host1.example.com (even if that information
is in the cert).

- If you just check the dNSName, you will miss the fact that you talk to the
designated ldap server and not the xmpp server (even if that information is
in the cert).


 In this I-D, we talk about DNS domain name and service type, which
 map quite well to _Service.Name from RFC 4985: the DNS domain name is
 the source domain provided by the user or configured into the client
 (e.g., example.com) and the service type is a given application
 protocol that could be serviced by the source domain (e.g., IMAP).
 
 This I-D is attempting to gently nudge people in the direction of
 checking both the DNS domain name and the service type. IMHO this is
 consistent with considering the SRVName and uniformResourceIdentifier
 subjectAltName entries as more tightly scoped than dNSName or CN, and
 therefore as potentially more secure in some sense (the subject might
 want to limit use of a particular certificate to only the service type
 identified in the SRVName or uniformResourceIdentifier).
 
 If by host name you mean target domain as defined in the I-D (and
 mapping to Target from RFC 2782) then we have more to discuss.
 
 I'm not against describing the typical case, as long as this specification
 does not imply that a relying party that has a reason to check two name
 types is doing something wrong.
 
 That is not the intent of this I-D, however that would be functionality
 over and above what this I-D defines.
 
 I have no extremely good examples of practical implementation here but
 checking both host name and service seems like both extremely easy and good
 practice.
 
 With respect to revisions to this I-D, the lack of good examples
 troubles me because we have been trying to abstract from common usage,
 not to define guidelines for use cases that have not yet been defined,
 implemented, and deployed.
 
 Given that you would prefer to leave the door open to more advanced
 checking rules, I think you would object to this text in Section 4.3:
 
Once the client has constructed its list of reference identifiers and
has received the server's presented identifiers in the form of a PKIX
certificate, the client checks its reference identifiers against the
presented identifiers for the purpose of finding a match.  It does so
by seeking a match and stopping the search if any presented
identifier matches one of its reference identifiers.  The search
fails if the client exhausts its list of reference identifiers
without finding a match.
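
The first-match search in the quoted text amounts to a short loop; a
minimal Python sketch, with identifiers modelled as (type, value) pairs
(a simplification assumed for this illustration):

```python
def seek_match(reference_ids, presented_ids):
    """Return True at the first presented identifier that equals a
    reference identifier; False if the reference list is exhausted.
    Identifiers are (type, value) tuples; values compare
    case-insensitively here, a simplification of real matching rules."""
    presented = {(t, v.lower()) for t, v in presented_ids}
    for t, v in reference_ids:
        if (t, v.lower()) in presented:
            return True    # stop the search at the first match
    return False           # search exhausted without a match
```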
 
 You are saying that it is not necessarily appropriate to stop the search
 once a single match is found, because the client might be configured to
 look for multiple matches (e.g., a match against both the source domain
 and the target domain). Would you like to suggest text that covers such
 a case? Here is a possible rewrite of Section 4.3 that might address
 your

Re: [certid] Review of draft-saintandre-tls-server-id-check

2010-09-09 Thread Stefan Santesson
On the issue of checking multiple name forms.

I would put it in another way. Web clients are typically only used to check
the domain name and nothing else because it is the only thing they care
about and know how to match.

PKI-enabled clients in general are used to check numerous name forms and
attributes in order to determine a match.

When you add SRV-ID to the pool you change what is usual in the case of TLS.

I think it is wrong to say as a general rule that a certificate successfully
maps to the appropriate server if either the SRV-Name or the DNS Name
matches. To me this is highly context dependent where different protocols
and applications have different needs.

If the only thing I need to know is that the server is authorized to deliver
the requested service for the requested domain, then an SRVName match alone is
OK. If you need to know that this host is the host it claims to be, then
it's not.

What needs to be checked is to me a typical case of local policy and one
size does not fit all.

/Stefan




On 10-09-09 8:11 PM, Shumon Huque shu...@isc.upenn.edu wrote:

 On Thu, Sep 09, 2010 at 12:59:29AM +0200, Stefan Santesson wrote:
 Peter,
 
 I don't see the problem with accepting a host name provided by a DNS SRV
 record as the reference identity.
 
 Could you elaborate on the threat?
 
 Example:
 
 I ask the DNS for the host providing smtp services for example.com
 I get back that the following hosts are available
 
 smtp1.example.com, and;
 smtp2.example.com
 
 I contact the first one using a TLS connection and receive a server
 certificate with the SRVName _smtp.example.com and the dNSName
 smtp1.example.com
 
 The certificate confirms that the host in fact is smtp1.example.com and that
 it is authorized to provide smtp services for the domain example.com.
 
 That is all you need. The host name from the DNS server was setting you on
 the track but is not considered trusted. What you trust is the names in the
 certificate.
 
 This is a more complicated example than the current draft
 addresses.
 
 In your example, the client is verifying a combination of
 identifiers (SRVName and dNSName) in the certificate. This
 seems like a reasonable thing to do, but this is not what
 most clients do today (I'd be happy to be corrected about
 that). Typically, they consider a match successful once they've
 found a single acceptable reference identifier. In that case,
 you can't simply use a reference identifier that contains
 a DNS mapped identifier unless you've obtained it in an authenticated
 (or statically configured) manner.


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: [certid] Review of draft-saintandre-tls-server-id-check

2010-09-09 Thread Stefan Santesson



On 10-09-09 8:38 PM, Shumon Huque shu...@isc.upenn.edu wrote:

 Earlier in RFC 4985, it says:
 
The SRVName, if present, MUST contain a service name and a domain
name in the following form:
 
   _Service.Name
 
The content of the components of this name form MUST be consistent
with the corresponding definition of these components in an SRV RR
according to RFC 2782
 
 I think this was actually clear enough. The subsequent statement that
 Name is "The DNS domain name of the domain where the specified service
 is located." (which could mean any of a number of things) confused the
 issue, and probably should not have been in the document.


Agreed, but since it will be an erratum, the text must be corrected.

Do you agree with my proposal?

The DNS domain name of a domain for which the certified subject
 is authorized to provide the identified service.

/Stefan 




Re: [certid] Review of draft-saintandre-tls-server-id-check

2010-09-09 Thread Stefan Santesson
Shumon,

On 10-09-09 10:08 PM, Shumon Huque shu...@isc.upenn.edu wrote:

 PKI-enabled clients in general are used to check numerous name forms and
 attributes in order to determine a match.
 
 Can you give us some examples of such applications, and where
 their subject identity matching rules are specified? Appendix
 A (Prior Art) probably should consider them.


Right now I have none that is applicable to the listed protocols. So I don't
think I have an example that is suitable for this annex.
But in general many government services using PKI are comparing multiple
attributes. Many national PKIs in Europe have banned single identifiers in
their certs, so the applications are forced to do multiple attribute
comparisons.

The thing is that name comparison is often done on an application level
according to local policy and even on the user level and the only thing I
have learned after spending 18 years with PKI is to expect almost anything
:)

In this context, EKUs are often also an important part of certificate
acceptance, a dimension that I miss in the current spec.

I don't think it is particularly useful to specify in generic documents what
constitutes a positive identification of the subject in terms of required
matching name forms.
It becomes useful mostly only when you want to achieve interoperability
within a reasonably narrow context.

/Stefan




Re: Review of draft-saintandre-tls-server-id-check

2010-09-08 Thread Stefan Santesson
My apology,

I just realized that the document defines source domain as what I thought
would be the target domain

   source domain:  The fully-qualified DNS domain name that a client
  expects an application service to present in the certificate.

Which makes my comments below a bit wrong.

I think it would be better to discuss this in terms of reference
identifier and presented Identifier.

   presented identifier:  An identifier that is presented by a server to
  a client within the server's PKIX certificate when the client
  attempts to establish a secure connection with the server; the
  certificate can include one or more presented identifiers of
  different types.

   reference identifier:  An identifier that is used by the client for
  matching purposes when checking the presented identifiers; the
  client can attempt to match multiple reference identifiers of
  different types.

I see no problem in obtaining the reference identifier from a DNS lookup and
then comparing it with a presented identifier in the certificate.

Why would you require the reference identity to be provided by a human user?

/Stefan



On 10-09-08 3:40 PM, Stefan Santesson ste...@aaa-sec.com wrote:

 Being the author of RFC 4985 I agree with most of what you say here.
 
 Comments in line;
 
 On 10-09-06 8:48 PM, Bernard Aboba bernard_ab...@hotmail.com wrote:
 
 That was in fact my original question.
 
 Section 5.1 states that the source domain and service type MUST be
 provided by a human user, and can't be derived.  Yet in an SRV or
 DDDS lookup, it is not the source domain that is derived, it is the
 target domain.  Given that, it's not clear to me what types of DNS
 resolutions are to be discouraged.
 
 
 This puzzled me as well. The domain of interest is the domain where the
 requested service is located = target domain.
 
 As noted elsewhere, RFC 4985 appears to require matching of the
 source domain/service type to the SRV-ID in the certificate.
 
 It is not. RFC 4985 says the following in section 2:
 
   _Service.Name
 
 snip
 
   Name
  The DNS domain name of the domain where the specified service
  is located.
 
 
  Such
 a process would be consistent with a match between user inputs
 (the source domain and service type) and the presented identifier
 (the SRV-ID).  
 
 
 Since this is not the definition of SRVName, this type of matching does not
 apply.
 
 
 Yet, Section 5.1 states:
 
 When the connecting application is an interactive client, the source
domain name and service type MUST be provided by a human user (e.g.
when specifying the server portion of the user's account name on the
server or when explicitly configuring the client to connect to a
particular host or URI as in [SIP-LOC]) and MUST NOT be derived from
the user inputs in an automated fashion (e.g., a host name or domain
name discovered through DNS resolution of the source domain).  This
rule is important because only a match between the user inputs (in
the form of a reference identifier) and a presented identifier
enables the client to be sure that the certificate can legitimately
be used to secure the connection.
 
However, an interactive client MAY provide a configuration setting
that enables a human user to explicitly specify a particular host
name or domain name (called a target domain) to be checked for
connection purposes.
 
 [TP] what I thought was about to be raised here was a contradiction that
 RFC4985
 is all about information gotten from a DNS retrieval whereas the wording of
 s5.1
 in this I-D
 
 the source
domain name and service type  ...  MUST NOT be derived from
the user inputs in an automated fashion (e.g., ... discovered through DNS
 resolution ... 
 
 would appear to exclude DNS resolution.  If DNS resolution is off limits,
 then
 RFC4985 would appear not to apply.
 
 
 RFC 4985 provides the client with a way to authenticate a host that it
 believes is authorized to provide a specific service in the target domain.
 
 It does not matter from where the client has obtained that authorization
 information or whether that information is trustworthy.
 
 A client may very well do an insecure DNS lookup to discover what host is
 providing the requested service. The client would then contact that host and
 obtain its certificate. If the certificate is trusted and its SRVName
 matches the information provided by the DNS server, then everything is fine.
 
 The client now has assurance from the CA that this host is in fact authorized
 to provide this service.
 
 
 /Stefan
 




Re: Review of draft-saintandre-tls-server-id-check

2010-09-08 Thread Stefan Santesson
Being the author of RFC 4985 I agree with most of what you say here.

Comments in line;

On 10-09-06 8:48 PM, Bernard Aboba bernard_ab...@hotmail.com wrote:

 That was in fact my original question.
 
 Section 5.1 states that the source domain and service type MUST be
 provided by a human user, and can't be derived.  Yet in an SRV or
 DDDS lookup, it is not the source domain that is derived, it is the
 target domain.  Given that, it's not clear to me what types of DNS
 resolutions are to be discouraged.
 

This puzzled me as well. The domain of interest is the domain where the
requested service is located = target domain.

 As noted elsewhere, RFC 4985 appears to require matching of the
 source domain/service type to the SRV-ID in the certificate.

It is not. RFC 4985 says the following in section 2:

  _Service.Name

snip

  Name
 The DNS domain name of the domain where the specified service
 is located.


  Such
 a process would be consistent with a match between user inputs
 (the source domain and service type) and the presented identifier
 (the SRV-ID).  
 

Since this is not the definition of SRVName, this type of matching does not
apply.

 
 Yet, Section 5.1 states:
 
 When the connecting application is an interactive client, the source
domain name and service type MUST be provided by a human user (e.g.
when specifying the server portion of the user's account name on the
server or when explicitly configuring the client to connect to a
particular host or URI as in [SIP-LOC]) and MUST NOT be derived from
the user inputs in an automated fashion (e.g., a host name or domain
name discovered through DNS resolution of the source domain).  This
rule is important because only a match between the user inputs (in
the form of a reference identifier) and a presented identifier
enables the client to be sure that the certificate can legitimately
be used to secure the connection.
 
However, an interactive client MAY provide a configuration setting
that enables a human user to explicitly specify a particular host
name or domain name (called a target domain) to be checked for
connection purposes.
 
 [TP] what I thought was about to be raised here was a contradiction that
 RFC4985
 is all about information gotten from a DNS retrieval whereas the wording of
 s5.1
 in this I-D
 
 the source
domain name and service type  ...  MUST NOT be derived from
the user inputs in an automated fashion (e.g., ... discovered through DNS
 resolution ... 
 
 would appear to exclude DNS resolution.  If DNS resolution is off limits,
 then
 RFC4985 would appear not to apply.
 

RFC 4985 provides the client with a way to authenticate a host that it
believes is authorized to provide a specific service in the target domain.

It does not matter from where the client has obtained that authorization
information or whether that information is trustworthy.

A client may very well do an insecure DNS lookup to discover what host is
providing the requested service. The client would then contact that host and
obtain its certificate. If the certificate is trusted and its SRVName
matches the information provided by the DNS server, then everything is
fine.

The client now has assurance from the CA that this host is in fact
authorized to provide this service.


/Stefan





Re: Review of draft-saintandre-tls-server-id-check

2010-09-08 Thread Stefan Santesson
For clarity, I'll provide two examples where I think it is justified to
obtain the FQDN reference identifier in an automated fashion and not through
direct user input or configuration (contrary to section 5.1).

1) Obtaining EU Trusted Lists
The EU has standardized on XML-based lists of national Trust Service Providers
and their certificates. Each country in the EU publishes its own list.
The EU commission provides a central list with the URLs of each national
list.

The first step is for the client to establish a secure connection with the
EU commission and download the EU list. This is done by configuration.

The second step is to automatically obtain reference identifiers for all
national lists from the EU list.


2) Redirects from a trusted service.
If I connect to a trusted service and then get redirected to another host,
it can be reasonable to obtain the reference identifier from the redirect.
A typical application I can think of is a redirect to a SAML IdP or a SAML
Discovery service.

/Stefan


On 10-09-08 4:21 PM, Stefan Santesson ste...@aaa-sec.com wrote:

 My apology,
 
 I just realized that the document defines source domain as what I thought
 would be the target domain
 
source domain:  The fully-qualified DNS domain name that a client
   expects an application service to present in the certificate.
 
 Which makes my comments below a bit wrong.
 
 I think it would be better to discuss this in terms of reference identifier
 and presented Identifier.
 
presented identifier:  An identifier that is presented by a server to
   a client within the server's PKIX certificate when the client
   attempts to establish a secure connection with the server; the
   certificate can include one or more presented identifiers of
   different types.
 
reference identifier:  An identifier that is used by the client for
   matching purposes when checking the presented identifiers; the
   client can attempt to match multiple reference identifiers of
   different types.
 
 I see no problem in obtaining the reference identifier from a DNS lookup and
 then comparing it with a presented identifier in the certificate.
 
 Why would you require the reference identity to be provided by a human user?
 
 /Stefan
 
 
 
 On 10-09-08 3:40 PM, Stefan Santesson ste...@aaa-sec.com wrote:
 
 Being the author of RFC 4985 I agree with most of what you say here.
 
 Comments in line;
 
 On 10-09-06 8:48 PM, Bernard Aboba bernard_ab...@hotmail.com wrote:
 
 That was in fact my original question.
 
 Section 5.1 states that the source domain and service type MUST be
 provided by a human user, and can't be derived.  Yet in an SRV or
 DDDS lookup, it is not the source domain that is derived, it is the
 target domain.  Given that, it's not clear to me what types of DNS
 resolutions are to be discouraged.
 
 
 This puzzled me as well. The domain of interest is the domain where the
 requested service is located = target domain.
 
 As noted elsewhere, RFC 4985 appears to require matching of the
 source domain/service type to the SRV-ID in the certificate.
 
 It is not. RFC 4985 says the following in section 2:
 
   _Service.Name
 
 snip
 
   Name
  The DNS domain name of the domain where the specified service
  is located.
 
 
  Such
 a process would be consistent with a match between user inputs
 (the source domain and service type) and the presented identifier
 (the SRV-ID).  
 
 
 Since this is not the definition of SRVName, this type of matching does not
 apply.
 
 
 Yet, Section 5.1 states:
 
 When the connecting application is an interactive client, the source
domain name and service type MUST be provided by a human user (e.g.
when specifying the server portion of the user's account name on the
server or when explicitly configuring the client to connect to a
particular host or URI as in [SIP-LOC]) and MUST NOT be derived from
the user inputs in an automated fashion (e.g., a host name or domain
name discovered through DNS resolution of the source domain).  This
rule is important because only a match between the user inputs (in
the form of a reference identifier) and a presented identifier
enables the client to be sure that the certificate can legitimately
be used to secure the connection.
 
However, an interactive client MAY provide a configuration setting
that enables a human user to explicitly specify a particular host
name or domain name (called a target domain) to be checked for
connection purposes.
 
 [TP] what I thought was about to be raised here was a contradiction that
 RFC4985
 is all about information gotten from a DNS retrieval whereas the wording of
 s5.1
 in this I-D
 
 the source
domain name and service type  ...  MUST NOT be derived from
the user inputs in an automated fashion (e.g., ... discovered through
 DNS
 resolution ... 
 
 would appear to exclude DNS resolution.  If DNS resolution is off

Re: I-D Action:draft-saintandre-tls-server-id-check-09.txt

2010-09-08 Thread Stefan Santesson
First of all, I'm sorry for my late review.

Time has been totally crazy after my vacation and I have worked nights and
weekends in order to get to the point where I could go through this with the
care it deserves.

In case it matters, here are my comments:

General:
consider substituting "PKIX-based systems" and "PKIX Certificates" with "PKI
systems based on RFC 5280" and "RFC 5280 Certificates", alternatively
include [PKIX] brackets to clarify that it references RFC 5280.
 
 
General:
I find the distinction between target and source domain both confusing and
unnecessary for defining rules for name matching. This seems only to be
relevant in the discussion of whether the source information needs to be
derived from a user, but I don't find that distinction useful. See discussion
concerning section 5.1. I think the document would benefit from reducing
this discussion to the distinction between reference identifier and
presented identifier.
 
 
General: 
I would consider stating that server certificates according to this profile
either MUST or SHOULD have the serverAuth EKU set. At least it MUST be set
when allowing checks of the CN-ID (see 2.3 below).
 
 
1.3.2 and section 3
SRVName is described as an extension in 1.3.2, but is in fact a Subject Alt
Name (otherName form) defined in RFC 4985.
This error is repeated in section 3, bullet 4, on page 16.
 
1.1. 
s/an application services/an application service
 
1.2
I find the following text hard to understand:
"(this document addresses only the DNS domain name of the application
service itself, not the entire trust chain)"
I'm not sure what the DNS domain name has to do with checking the entire
trust chain. Further, this document discusses more name forms than the DNS
domain name.
Perhaps this sentence was meant to say:
"(this document only addresses name forms in the leaf server certificate,
not any name forms in the chain of certificates used to validate the server
certificate)"
 
 
 
2.3
Allowing use of the CN-ID, especially commonName, for storing a domain name
should be restricted to certificates with the serverAuth EKU. It should not
be allowed to do this matching in the absence of this EKU. Requiring the EKU
reduces the probability that the CN-ID appears to be a domain name by
accident or is a domain name in the wrong context.
 
In many deployments, this also affects name constraints processing: domain
name constraints must then be applied to the CN attribute as well.
 
There should be a rule stating that any client that accepts the CN attribute
to carry the domain name MUST also perform name constraints processing on
this attribute using the domain name logic if name constraints are applied
to the path. Failing this requirement poses a security threat if the claimed
domain name in the CN-ID violates the name constraints set for domain names.
 
 
 
4.4.3 checking wildcard labels
The restriction to match against only one subdomain seems not to be
compatible with RFC 4592. RFC 4592 states:
 
   A wildcard domain name can have subdomains.  There is no need to
   inspect the subdomains to see if there is another asterisk label in
   any subdomain.
 
Further, I'm pretty sure that the rule of this draft is incompatible with
many deployments of wildcard matching.
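For concreteness, here is a sketch of the single-subdomain restriction being questioned above (one wildcard standing for exactly one left-most label); under RFC 4592, DNS wildcards can additionally cover subdomains, which this sketch deliberately rejects. The function name and shape are illustrative, not from the draft:

```python
def wildcard_match(presented: str, reference: str) -> bool:
    # Single-subdomain rule as the draft states it: "*" may appear only
    # as the left-most label and matches exactly one label of the
    # reference identifier.
    p = presented.lower().rstrip(".").split(".")
    r = reference.lower().rstrip(".").split(".")
    if p[0] != "*":
        return p == r                      # no wildcard: exact match only
    # "*.example.com" matches "foo.example.com" ...
    # ... but not "bar.foo.example.com" (more than one label).
    return len(p) == len(r) and p[1:] == r[1:]

assert wildcard_match("*.example.com", "foo.example.com")
assert not wildcard_match("*.example.com", "bar.foo.example.com")
```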
 
 
4.6.2 
States:
 
   In this case, the
   client MUST verify that the presented certificate matches the cached
   certificate and (if it is an interactive client) MUST notify the
   user if the certificate has changed since the last time a secure
   connection was successfully negotiated
 
How does the client know if the certificate has changed or whether it just
obtained an unauthorized certificate?
 
I guess in some cases it would work but I feel sure there are exception
cases where the client just has a configured certificate but no knowledge of
what it obtained the last time it talked to the server.
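The check quoted from 4.6.2 is essentially certificate pinning. A minimal sketch under assumptions of my own (fingerprint comparison of DER bytes is an implementation choice, not mandated by the draft), including the exception case where nothing was cached:

```python
import hashlib
from typing import Optional

def cert_changed(cached_der: Optional[bytes], presented_der: bytes) -> bool:
    # Pinning check per 4.6.2: compare the presented certificate with
    # the cached one, here by SHA-256 fingerprint of the DER encoding.
    if cached_der is None:
        # Exception case discussed above: the client has no record of
        # what it obtained last time, so nothing has "changed".
        return False
    return (hashlib.sha256(cached_der).digest()
            != hashlib.sha256(presented_der).digest())
```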
 
 
 
5.1. Service delegation:
It seems reasonable that the service type can be selected by the client
application (which obviously may know what service it wants to communicate
with) and consequently need not be provided by a human user.
The requirement in this section that the service name MUST be provided by
user input:

   When the connecting application is an interactive client, the source
   domain name and _service type_ MUST be provided by a human user (e.g.
   when specifying the server portion of the user's account name on the
   server or when explicitly configuring the client to connect to a
   particular host or URI as in [SIP-LOC]) and MUST NOT be derived from
   the user inputs in an automated fashion (e.g., a host name or domain
   name discovered through DNS resolution of the source domain).

Seems to contradict the corresponding description in 3.1:
 
   o  An SRV-ID can be either direct (provided by a user) or more
  typically indirect (resolved by a client) and is restricted (can
  be used for only a single application).
 
I can also see valid cases where the domain name is provided by automatic
resolution of information retrieved 

Re: Review of draft-saintandre-tls-server-id-check

2010-09-08 Thread Stefan Santesson



On 10-09-08 9:53 PM, Shumon Huque shu...@isc.upenn.edu wrote:

 If the reference identifier is  _Service.Name then the match is being done
 on the *input* to the SRV lookup process, not the output, and prohibition on
 DNS lookups would not apply (or even make any sense).
 
 Yes.
 
 The output of the SRV record lookup contains a target hostname,
 not a service name, so it's not applicable to the SRVName name
 form. The target could be used in another name form (dNSName)
 as the reference identifier, but then the client needs to convince
 itself that the lookup was done securely (DNSSEC or some other
 means) otherwise there's a security problem.

I disagree,

A client can use the output of the DNS lookup even when it comes from a
normal, insecure DNS server.

The only thing the client needs to do is to verify that the domain name
provided in the input to the lookup matches the host names provided in the
output. It can then safely use the host names in the SRV record as reference
identifiers IF the SRV-ID in the server certificate matches the
reference identifier.

A false host represented by a false identifier from a bad DNS server will
not be able to present a trusted certificate that supports its claim to be
an authorized provider of the requested service for the domain in question.
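A minimal sketch of the check described above; the function name and data shapes are illustrative assumptions (a real client would extract the SRVName and dNSName values from a certificate validated to a trusted root):

```python
def accept_srv_host(service, source_domain, srv_target, cert_srv_ids, cert_dns_ids):
    # The SRV answer may come from an insecure resolver; the trust comes
    # from the certificate. Accept only if the (trusted) certificate both
    # names the contacted host and authorizes it for the service in the
    # domain the client originally asked about.
    expected_srv_id = "_{}.{}".format(service, source_domain).lower()
    authorized = expected_srv_id in (s.lower() for s in cert_srv_ids)
    is_that_host = srv_target.lower() in (d.lower() for d in cert_dns_ids)
    return authorized and is_that_host

# The smtp example from this thread: an (untrusted) SRV answer pointed
# the client at smtp1.example.com for smtp service in example.com.
assert accept_srv_host("smtp", "example.com", "smtp1.example.com",
                       ["_smtp.example.com"], ["smtp1.example.com"])
# A rogue answer pointing at a host certified only for another domain fails.
assert not accept_srv_host("smtp", "example.com", "evil.hosting.net",
                           ["_smtp.hosting.net"], ["evil.hosting.net"])
```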

/Stefan
 




Re: Review of draft-saintandre-tls-server-id-check

2010-09-08 Thread Stefan Santesson
Peter,

I don't see the problem with accepting a host name provided by a DNS SRV
record as the reference identity.

Could you elaborate on the threat?

Example:

I ask the DNS for the host providing smtp services for example.com
I get back that the following hosts are available

smtp1.example.com, and;
smtp2.example.com

I contact the first one using a TLS connection and receive a server
certificate with the SRVName _smtp.example.com and the dNSName
smtp1.example.com

The certificate confirms that the host in fact is smtp1.example.com and that
it is authorized to provide smtp services for the domain example.com.

That is all you need. The host name from the DNS server put you on the
right track but is not itself trusted. What you trust are the names in the
certificate.



As for IdP services: this type of redirect is extremely common and widely
deployed in SAML.

Example: The client connects to a service by choice. That service retrieves
the metadata for the identity federation from a central database.
Among other things this metadata contains URLs to all approved Identity
Providers (IdPs) in the federation.

In order to allow the user to select the appropriate IdP service the user
may first be directed to a SAML discovery service (often called WAYF =
Where Are You From). The WAYF service redirects the user back to the
original service with information about the selected IdP. The Service
locates the IdP from the metadata and redirects the user to its IdP for
identification. The user is then handed a ticket (SAML assertion) and gets
redirected back to the service with the ticket.

Again, this is very common, carefully designed, and has undergone thorough
security reviews in a very large community.

I think it is very bold of the IETF to claim that SAML got it wrong and
should not be allowed.

Unfortunately, for these reasons I still don't think the proposed text is
satisfactory.

/Stefan


On 10-09-09 12:01 AM, Peter Saint-Andre stpe...@stpeter.im wrote:

 On 9/8/10 11:28 AM, Stefan Santesson wrote:
 For clarity, I'll provide two examples where I think it is justified to
 obtain the FQDN reference identifier in an automated fashion and not through
 direct user input or configuration (contrary to section 5.1).
 
 Thanks for the examples.
 
 1) Obtaining EU Trusted Lists
 The EU has standardized on XML-based lists of national Trust Service Providers
 and their certificates. Each country in the EU publishes its own list.
 The EU commission provides a central list with the URLs of each national
 list.
 
 The first step is for the client to establish a secure connection with the
 EU commission and download the EU list. This is done by configuration.
 
 The second step is to automatically obtain reference identifiers for all
 national lists from the EU list.
 
 
 2) Redirects from a trusted service.
 If I connect to a trusted service and then get redirected to another host,
 it can be reasonable to obtain the reference identifier from the redirect.
 A typical application I can think of is a redirect to a SAML IdP or a SAML
 Discovery service.
 
 It seems to me that in both of these cases, the user has placed trust in
 a certain entity (the EU's system of trust service providers, a
 particular trusted service) and has configured his or her application
 client to allow that entity to transform a source domain into a
 target-domain-based reference identifier for the purposes of secure
 connection. (One could argue that DNSSEC might result in a similar
 arrangement.)
 
 Would the following text address the scenarios you mention?
 
 ###
 
 5.1.  Service Delegation
 
When the connecting application is an interactive client, the source
domain name and service type SHOULD be provided by a human user (e.g.
when specifying the server portion of the user's account name on the
server or when explicitly configuring the client to connect to a
particular host or URI as in [SIP-LOC]) and SHOULD NOT be derived
from the user inputs in an automated fashion (e.g., a host name or
domain name discovered through DNS resolution of the source domain).
This rule is important because only a match between the user inputs
(in the form of a reference identifier) and a presented identifier
enables the client to be sure that the certificate can legitimately
be used to secure the connection.
 
There are several scenarios in which it can be legitimate for an
interactive client to override the recommendation in the foregoing
rule.  Examples include:
 
1.  A human user has explicitly configured the client to associate a
particular target domain with a given source domain (e.g., for
connection and identity checking purposes the user has explicitly
approved apps.example.net as the target domain associated with
a source domain of example.com).
 
2.  A human user has explicitly agreed to trust a service that
provides mappings of source domains to target

Re: Review of draft-saintandre-tls-server-id-check

2010-09-08 Thread Stefan Santesson
Peter,

Thanks for the clarifying example. I see now what problem you are addressing.

Comments in line;

On 10-09-09 12:35 AM, Peter Saint-Andre stpe...@stpeter.im wrote:

stuff deleted

 It is not. RFC 4985 says the following in section 2:
 
   _Service.Name
 
 snip
 
   Name
  The DNS domain name of the domain where the specified service
  is located.
 
 Perhaps some examples would help.
 
 The good folks at example.com have delegated their IM service to
 apps.hosting.net. When the user some...@example.com does an SRV lookup
 for _xmpp-client._tcp im.example.com, its client gets back something
 like this:
 
 20 0 5222 apps.hosting.net.
 
 The client resolves apps.hosting.net to an IP address and connects to
 that machine.
 
 During TLS negotiation, the application service for example.com (which
 is in fact being serviced by apps.hosting.net) presents a certificate
 that contains an SRV-ID. Which of the following is it?
 
 1. _xmpp.example.com
 
 or:
 
 2. _xmpp.apps.hosting.net
 

According to the actual intent of RFC 4985 the right answer is 1; however,
the definition of the name form suggests 2, since this is where
the service is located. I think this is an error in RFC 4985. See
below.

In case 2, if the DNS fooled you, then you may end up at an authorized
service for hosting.net that has no business serving example.com, and you
have no way to tell.


 If #1, then the Name in _Service.Name is indeed a Name as defined in
 RFC 2782.
 
 If #2, then the Name in _Service.Name is actually a Target as
 defined in RFC 2782.
 


 The client now has assurance from the CA that this host is in fact
 authorized to provide this service.
 
 To use my example, the CA is providing assurance that apps.hosting.net
 is authorized to provide the XMPP service on behalf of example.com. That
 seems reasonable if the presented identifier based on the source domain
 (example.com). However, if the assurance is checked on the client side
 by finding _xmpp.apps.hosting.net as the presented identifier then I
 fail to understand something very basic: how does the client tie that
 SRV-ID to the source domain (example.com) in a secure fashion? The
 presented identifier seems to be a mere assertion without any connection
 whatsoever to the source domain.

I actually think we made an error in 4985 and that the domain name should be
the domain that the service is authorized to represent.

RFC 4985 is ambiguous here: the definition of the name form says:

   The DNS domain name of the domain where the specified service
is located.

This corresponds to #2 in your example,
while the description underneath the definition states:

   The purpose of the SRVName is limited to authorization of service
provision within a domain.

Which corresponds to #1.

I think there should be an errata correcting the definition to be:

   The DNS domain name of a domain for which the certified subject
is authorized to provide the identified service.

As it is now, the RFC is ambiguous.
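To make the two readings concrete, here is a minimal sketch of reference-identifier matching under interpretation #1 (function names are mine and purely illustrative, not from any RFC):

```python
def srv_reference_id(service, source_domain):
    # Interpretation #1: the Name part of _Service.Name is the SOURCE
    # domain the certified subject is authorized to serve, so the
    # reference identifier is built from the domain the user asked for,
    # not from the SRV target host.
    return "_%s.%s" % (service, source_domain)

def srv_id_matches(presented_id, service, source_domain):
    # Case-insensitive comparison, since DNS names are case-insensitive.
    return presented_id.lower() == srv_reference_id(service, source_domain).lower()

# "_xmpp.example.com" matches; "_xmpp.apps.hosting.net" (interpretation #2)
# does not, and could only be accepted through local configuration.
```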

 If we just wave our hands and say the
 client can simply let the user believe that it's OK for apps.hosting.net
 to assert a right to provide the IM service for example.com and it
 doesn't matter if there is no basis for that belief because the
 information might not be trustworthy then I wonder what RFC 4985 really
 accomplishes or whether we want to encourage anyone to use the SRVName
 extension (at least absent DNSSEC, see for example draft-barnes-xmpp-dna).
 

I agree. But if we can correct the definition (or specify a new OID for a
corrected name form) according to my proposal above, and clarify the use in
your document, would that help in any way?

I agree that if the SRVName in the cert provides a domain name that is
different from the domain name you asked for (and it's not DNSSEC), then you
will have to rely on local configuration in order to accept it.

I'm not sure how we should fix this. We had quite a large review of this RFC
but nobody caught this error.

/Stefan


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Review of draft-saintandre-tls-server-id-check

2010-09-08 Thread Stefan Santesson



On 10-09-09 12:10 AM, Peter Saint-Andre stpe...@stpeter.im wrote:

 Aha, I see the source of confusion. I think the first sentence of
 Section 5.1 is better written as follows:
 
When the connecting application is an interactive client,
construction of the reference identifier SHOULD be based on the
source domain and service type provided by a human user

Could we remove "and service type" from that sentence?
It is the domain name that is the problem, and not the service type, right?

I mean, the app may very well know that it is supposed to talk to an XMPP
service without asking a human user.

/Stefan




Re: Why the normative form of IETF Standards is ASCII

2010-03-29 Thread Stefan Santesson
Martin,

Thanks for your great review!

On 10-03-26 4:17 PM, Martin Rex m...@sap.com wrote:

 I downloaded the WG document ASCII I-D (14-pages) from
 http://tools.ietf.org/id/draft-ietf...
 loaded it into NRoffEdit, selected Edit-Convert Text to NRoff,
 spent about 30 minutes fixing the Table Of Contents, I-D header
 and some minor formatting defects from the conversion along with
 several existing spelling errors reported by NRoffEdit and formatting
 issues like new sections starting very close to the bottom of pages.
 ... the original author is likely an xml2rfc user without a
 spell checker in his tool-chain.

Just a short comment on this to clarify.

You don't need to fix the table of contents in NroffEdit. You just delete the
present one, then select Edit - Paste managed 'Table of Contents', and
NroffEdit will generate a new one for you that is automatically updated as
you edit the draft.

/Stefan




Re: Why the normative form of IETF Standards is ASCII

2010-03-25 Thread Stefan Santesson
Actually, there seems to be one here:
http://sourceforge.net/projects/rfc2xml/

Not sure how good a job it does.

/Stefan


On 10-03-24 5:10 PM, Julian Reschke julian.resc...@gmx.de wrote:

 On 25.03.2010 00:56, Stefan Santesson wrote:
 Julian,
 
 One minor question.
 
 How do you use xml2rfc to edit a document when you don't have that document
 in xml format?
 
 You don't.
 
 For example, if it was not originally created using xml2rfc.
 
 Somebody might have converted it (you may want to google for it, or ask
 the RFC Editor). Otherwise, you need to convert.
 
 Anticipating the next question: no, I'm not aware of a tool that does
 that well; in my experience, to get good XML (with proper markup of
 artwork, lists, references...), you really have to do it manually.
 
 Best regards, Julian




Re: Why the normative form of IETF Standards is ASCII

2010-03-24 Thread Stefan Santesson
Julian,

One minor question.

How do you use xml2rfc to edit a document when you don't have that document
in xml format?

For example, if it was not originally created using xml2rfc.

/Stefan

On 10-03-22 2:58 PM, Julian Reschke julian.resc...@gmx.de wrote:

 On 22.03.2010 22:28, Martin Rex wrote:
 ...
 With xml2rfc 1, 2, 3, 4, 5 and 6 are all seperate, manual and painful
 steps that require all sorts of unspecified other software and require
 you to search around for information and read lots of stuff in order to
 get it working.  The specific tools, their usage and the options to
 automate some steps from the above list varies significantly between OS.
 ...
 
 Hi Martin,
 
 it's clear that you're very happy with nroffedit. I have no problem with
 that.
 
 But you paint a picture of xml2rfc that isn't totally accurate.
 
 There are editors with spell checking.
 
 There are editors with direct preview.
 
 Also, you need two things, exactly as for Nroffedit (xml2rfc.tcl + a TCL
 impl, instead of nroffedit + Java).
 
 There is an alternate impl, rfc2629.xslt, which requires exactly one
 file, and a browser.
 
 ...
 My first encounter with TeX was in 1990 on an Amiga, and it came _with_
 an editor where you could run your document through the TeX processor and
 get it display (DVI graphics output on screen) with a single keypress.
 Not WYSIWYG, but still quite usable.  And comparing the quality of
 the printed output, none of the existing WYSIWYG solutions came even close.
 ...
 
 I liked Tex, too. Although I ran it on that other 68K machine.
 
 In absence of an easy-to-use, single tool (preferably platform-independent)
 to take care of most of the document editing, INCLUDING visualizing
 the output, I see no value in considering an XML-based authoring
 format for I-Ds and RFCs.
 
 Again: use a text editor plus a web browser.
 
 ...
 Even reflowing the existing paginated ASCII output should be fairly
 simple.  What is missing in that output is the information about the
 necessary amount of vertical whitespace between pages when removing
 page breaksheaderfooter lines.
 ...
 
 What's also missing is the information whether text is reflowable or not
 (think ASCII artwork, program listing, tables...).
 
 Best regards, Julian


Re: Why the normative form of IETF Standards is ASCII

2010-03-24 Thread Stefan Santesson

On 10-03-12 8:34 PM, Julian Reschke julian.resc...@gmx.de wrote:

 Because of the page breaks and the consistent presence of these
 headers and footers just before and after the page breaks, an
 accessibility tool should be able to recognize them as such.
 
 I agree it would be nice if they did that. Do they?

Both NroffEdit and the Idnits tool do. It's not that hard.
Pages are separated by a form feed character.

/Stefan




NroffEdit updated with December 2009 boilerplate

2010-03-24 Thread Stefan Santesson
Andrew,

You don't need an official template. You just need one that works and passes
ID nits. NroffEdit comes with an nroff template that satisfies the ID nits
check.

I have just updated the NroffEdit tool with a new template that incorporates
the December 2009 boilerplate.

Downloads are available from:
http://aaa-sec.com/nroffedit/nroffedit/download.html

Just the empty nroff template is available here:
http://aaa-sec.com/pub/NroffEdit/empty.nroff

/Stefan


On 10-03-21 4:50 AM, Masataka Ohta mo...@necom830.hpcl.titech.ac.jp
wrote:

 Andrew Sullivan wrote:
 
 I had, in the past year, two different DNSEXT participants send me
 frustrated email because of the idnits checks.  The people in question
 were both long-time contributors to the IETF with perhaps
 ideosyncratic toolchains.  Neither of them was using xml2rfc, and
 neither of them had well-maintained *roff templates that just did the
 right thing.
 
 While I recently spent a few extra hour editing nroff source, a
 solution for those who are less familiar with nroff is to provide
 an official nroff template.
 
 Anyway, it's an issue caused by complex legal requirements and
 the fundamental solution is to loosen the requirements at least
 for IDs.
 
 With regard to the subject, introduction of non-ASCII characters
 will make the matter a lot worse.
 
 Masataka Ohta
 
 


Re: [TLS] Metadiscussion on changes in draft-ietf-tls-renegotiation

2010-01-29 Thread Stefan Santesson
This makes no sense to me.

Developers tend to live by the rule "be liberal in what you accept", as it
tends to generate fewer interop problems.
It makes no sense to abort a TLS handshake just because it contains an SCSV
if everything else is OK, so this MUST NOT requirement will likely be
ignored by some implementations.

The -03 draft gives SCSV somewhat double and conflicting semantics.

1) Present in an initial handshake it signals client support for secure
renegotiation.

2) Present in a renegotiation handshake it signals that the client DOES NOT
support secure renegotiation (Handshake MUST be aborted).

I think this is asking for trouble.
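To make the conflict concrete, the two cases can be sketched like this (a simplified decision model of the draft semantics as I read them, not real TLS code; names are mine):

```python
def evaluate_client_hello(is_renegotiation, has_scsv, has_ri):
    if not is_renegotiation:
        # Semantics 1: in an initial handshake, SCSV (or an RI extension)
        # signals client support for secure renegotiation.
        return "secure" if (has_scsv or has_ri) else "legacy"
    # Semantics 2: in a renegotiation handshake, SCSV signals the opposite,
    # so the handshake must be aborted even if a valid RI is also present.
    if has_scsv:
        return "abort"
    return "secure" if has_ri else "abort"
```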

/Stefan


On 10-01-27 8:33 AM, Nikos Mavrogiannopoulos n...@gnutls.org wrote:

 On Wed, Jan 27, 2010 at 1:05 AM, Martin Rex m...@sap.com wrote:
 
 asideThat's been the standard for PKIX RFCs for at least ten years
 (actively acknowledged by WG mmembers), although perhaps its spread
 to other groups should be discouraged./aside
 
 I fully agree.
 
 That may be attributed to the fact that a large part of PKIX is dealing
 with policy issues with the objective to prevent/prohibit interoperability.
 
 On the contrary. I believe allowing the sending of both SCSV and extension
 might harm interoperability instead. Consider the case of most popular client
 implementations are sending both SCSV and extension (it's easier to do so).
 A developer of a server might then consider checking only for SCSV (since all
 of the popular ones he tested with send both). Thus interoperability with less
 popular clients that only send extension stops.
 
 This scenario might not be very likely, but this kind of issues were
 not rare in
 TLS for quite long :)
 
 best regards,
 Nikos
 ___
 TLS mailing list
 t...@ietf.org
 https://www.ietf.org/mailman/listinfo/tls




Re: [TLS] Metadiscussion on changes in draft-ietf-tls-renegotiation

2010-01-29 Thread Stefan Santesson
Good points Marsh, but a few comments in line:


On 10-01-29 4:53 PM, Marsh Ray ma...@extendedsubset.com wrote:

 Stefan Santesson wrote:
 This makes no sense to me.
 
 Developers tend to live by the rule to be liberal in what you accept as it
 tends to generate less interop problems.
 
 Not in my experience.
 
 HTML is perhaps the ultimate example of a liberal implementation of a
 spec. To this day, many years later, it's common to find pages that
 render badly with one or more implementations.
 
 Whenever an application is actively being liberal in what it accepts,
 the sending application is in fact relying on undocumented behavior.
 This is what causes the ineterop problems, not strictness.
 
 In practice, if protocol receiving applications are consistently strict
 about what they accept, then mutant protocol sending applications do not
 get out to pollute the ecosystem.
 

I'm just speaking out of personal experience during my 5 years working with
developers of PKI and internet security protocols. It was a constant battle of
taking all non-compliant implementations into account and breaking as few of
them as possible IF it could be done without reducing security.

I could give you a long list of interesting examples :)

The key is though that being liberal is only an option when this does not
hurt security, and that is a tricky balance act.


 It makes no sense to abort a TLS handshake just because it contains an SCSV
 if everything else is OK.
 
 This is a cryptographic data security protocol for which
 interoperability must be secondary to security. If anything is malformed
 or suspicious, the handshake should be aborted.
 

I totally agree in principle. However, if a renegotiating client sends a
full RI extension with valid data AND the client also sends SCSV, then what
security problem is introduced by allowing the renegotiation (i.e., not
aborting)?

No matter how hard I try, I can't find the security problem and I can't find
the interoperability advantage.

Hence, the MUST abort requirement seems like an unmotivated restriction.
I'm not saying that we have to change the current draft; I'm just curious to
understand the real benefits of this requirement.


 So This MUST NOT requirement will likely be
 ignored by some implementations.
 
 They should expect the market to hold them accountable if problems result.
 
 The implementers on this list have indicated they could produce
 interoperable implementations of this spec. And they appear to have
 proven it by demonstrating implementations which actually interoperate.
 

Again, based on what I have seen, I would not be surprised.
I don't think being accountable to the market is a very strong threat if a
safe adjustment to more liberal processing helps customers avoid
interop problems while maintaining the security of the protocol.

Hence, if there IS a security threat that motivates this, then it is
extremely important to spell it out in the clear.

 03 gives SCSV somewhat double and conflicting semantics.
 
 1) Present in an initial handshake it signals client support for secure
 renegotiation.
 
 2) Present in a renegotiation handshake it signals that the client DOES NOT
 support secure renegotiation (Handshake MUST be aborted).
 
 I think this is asking for trouble.
 
 Yep, it's an inelegant hack. None of this is how any of us would have
 designed it that way from the beginning.
 

And here we totally agree :)

/Stefan


 I would recommend that clients always send a proper RI extension and
 don't even mess with sending SCSV. Extensions-intolerant servers should
 just go away. Servers just treat SCSV like an empty RI (clearly wrong
 for a renegotiation) without adding much complexity.
 
 As for [Upgraded] Clients which choose to renegotiate [] with an
 un-upgraded server, they deserve whatever they get.
 
 SCSV is just for those implementers that want to make insecure
 connections with old and broken peers. That task has always involved
 added complexity and they know the work they're buying for themselves.
 
 - Marsh




Re: [TLS] Last Call: draft-ietf-tls-renegotiation (Transport Layer Security (TLS) Renegotiation Indication Extension) to Proposed Standard

2009-12-01 Thread Stefan Santesson
On 09-12-01 12:19 AM, David-Sarah Hopwood david-sa...@jacaranda.org
wrote:

 The IESG wrote:
 The IESG has received a request from the Transport Layer Security WG
 (tls) to consider the following document:
 
 - 'Transport Layer Security (TLS) Renegotiation Indication Extension '
draft-ietf-tls-renegotiation-01.txt as a Proposed Standard
 
 The IESG plans to make a decision in the next few weeks, and solicits
 final comments on this action.  Please send substantive comments to the
 ietf@ietf.org mailing lists by 2009-12-14. Exceptionally,
 comments may be sent to i...@ietf.org instead. In either case, please
 retain the beginning of the Subject line to allow automated sorting.
 
 The file can be obtained via
 http://www.ietf.org/internet-drafts/draft-ietf-tls-renegotiation-01.txt
 
 I believe the decision to take this draft to Last Call was premature, and
 that further discussion in the WG is necessary before deciding whether to
 adopt draft-ietf-tls-renegotiation or an alternative.

I agree to this.

I will support the current draft-ietf-tls-renegotiation-01 if that is the
choice of direction at the end of this last call.

However, the TLS working group has also been working hard to evaluate
whether an alternative solution could provide better integration with legacy
deployments and provide better security properties.

This last call was initiated before the WG members had any real chance to
express their choice of direction.

For the sake of completeness, the alternative approach was updated and
posted today and is available from:
http://tools.ietf.org/html/draft-mrex-tls-secure-renegotiation-02

It is my opinion that this draft offers two noticeable advantages over
draft-ietf-tls-renegotiation-01

1) By not sending verify data over the wire, this draft allows peers to
fail-safe as a result of implementation errors in cases where a
corresponding implementation error of draft-ietf-tls-renegotiation-01 leads
to a fail-unsafe situation.

2) It offers a solution where servers never have to parse data from TLS
extensions. This offers advantages when deploying the fix for legacy
implementations.


I would support either the choice of draft-mrex-tls-secure-renegotiation-02
as the selected solution to the problem, or an update of
draft-ietf-tls-renegotiation-01 that incorporates the updated handshake
message hash calculation of draft-mrex-tls-secure-renegotiation-02 as an
alternative to sending verify data in the RI extension.

/Stefan
 




NroffEdit updated with new boilerplate

2009-11-11 Thread Stefan Santesson
Short informational notice.

A new update of NroffEdit is available (
http://aaa-sec.com/nroffedit/index.html ), supporting the boilerplate from
the new Trust Legal Provisions from September 2009.

/Stefan


Current ietf e-mail delays

2009-09-01 Thread Stefan Santesson
I and others have lately experienced unusually long delays when posting
messages to various IETF mailing lists.
A delivery time of 4-5 hours or more is not uncommon.

Anyone else having the same issue, or any idea what the problem is?

/Stefan


New major release of NroffEdit

2009-08-31 Thread Stefan Santesson
For your information:

A new major release of NroffEdit is available from:
http://aaa-sec.com/nroffedit/index.html

This version provides several features requested by the RFC editor as well
as other IETF:ers. This includes:

- Added background spellchecker
- Color and style formats added to Nroff editing
- Added support for the .tl directive for both nroff-text and
  text- nroff conversion
- Added search and replace feature
- Function for updating all issue and expiration dates

/Stefan




Re: New major release of NroffEdit

2009-08-31 Thread Stefan Santesson
Resending as the original mail got lost.
Sorry for double posting in case it turns up...

On 09-08-31 7:21 PM, Stefan Santesson ste...@aaa-sec.com wrote:

 For your information:
 
 A new major release of NroffEdit is available from:
 http://aaa-sec.com/nroffedit/index.html
 
 This version provides several features requested by the RFC editor as well as
 other IETF:ers. This includes:
 
 - Added background spellchecker
 - Color and style formats added to Nroff editing
 - Added support for the .tl directive for both nroff-text and
   text- nroff conversion
 - Added search and replace feature
 - Function for updating all issue and expiration dates
 
 /Stefan




Re: Automatically updated Table of Contents with Nroff

2009-07-16 Thread Stefan Santesson
All of this is interesting reading indeed and reveals stuff that I did not
know about.

However, I have never regarded boilerplate issues as something that has
bothered me a lot when writing a draft. These boilerplates are pretty stable
and once you have got them in place in the first draft (often by copying them
from another document) you can mostly forget about them. Id-nits will always
scream if you need to update them.

What I personally have felt as the major void is an easy way to see your
finished product, not afterwards, but while editing.

I don't want to save my current edit to a file and enter it into some kind
of tool to see what I write. I want to see it while I'm editing. E.g., what
happens if I include a page break here? Does it look better if I add an
extra line, or should I keep this section on the same page? Etc.

I've learned that there are trivial ways to solve many hard issues like TOC
generation in nroff. Seeing how it's done, it appears trivial only if you know
how to do it and if you are prepared to expand your knowledge about nroff
quite a bit. And you still won't see the result while you edit.

/Stefan


On 09-07-16 7:15 AM, Julian Reschke julian.resc...@gmx.de wrote:

 Randy Presuhn wrote:
 Hi -
 
 From: Julian Reschke julian.resc...@gmx.de
 To: Randy Presuhn randy_pres...@mindspring.com
 Cc: IETF Discussion Mailing List ietf@ietf.org
 Sent: Wednesday, July 15, 2009 10:13 AM
 Subject: Re: Automatically updated Table of Contents with Nroff
 ...
 And of course you can do that with xml2rfc as well; just automate the
 process of converting the source file to something that can be included
 into the XML source using the standard XML include mechanisms.
 
 
 I couldn't find such mechanisms described anywhere the first time I used
 xml2rfc.
 I just looked at the W3C website XML pages, and still am unable to find them.
 How does one do a simple .so or a #include in XML?
 
 http://xml.resource.org, see under Helpful Hints / Including Files.
 That page also links to http://tools.ietf.org/tools/templates/ with
 templates. Look, for instance, at
 http://tools.ietf.org/tools/templates/template-edu-xml2rfc.xml.
 
 In the XML spec, look for external entities
 (http://www.w3.org/TR/REC-xml/#sec-external-ent).
 
 There are other inclusion mechanisms; xml2rfc supports a processing
 instruction: 
 http://xml.resource.org/authoring/README.html#include.file.facility (I
 don't like this one as it is specific to the xml2rfc processor).
 
 And then, because it's XML, preprocessing is easy; such as with XInclude
 mentioned by Marc Petit-Huguenin.
 
 Hope this helps, Julian
 


Re: Automatically updated Table of Contents with Nroff

2009-07-16 Thread Stefan Santesson
I guess you are right about that.

And I'm mostly the same way. Still I find things when I see the finished
product that I want to tweak before submission.

Even if I like the result, I mostly end up doing at least 10 iterations of
Save - compile - open in viewer - check result, before I'm done.

Page breaks are the least of the issues. What is more important is that
sections describing protocol syntax are not all messed up with wrong
indentation, line breaks, etc.

I guess not everyone appreciates WYSIWYG, but I do :)

/Stefan


On 09-07-16 12:15 PM, Julian Reschke julian.resc...@gmx.de wrote:

 Stefan Santesson wrote:
 ...
 I don't want to save my current edit to a file and enter it into some kind
 of tool to see what I write. I want to see it while I'm editing. E.g. what
 happens if I include a page break here? Does it look better if I add an
 extra line or should I keep this section on same page?, etc.
 ...
 
 I realize that different people have different preferences :-).
 
 As a matter of fact, while editing a draft, page breaks are the least of
 my worries. I do not want to be bothered with them. I do not want to
 fine tune vertical whitespace. Keep in mind that whatever you spend your
 time on with respect to this will be undone by the RFC Editor anyway.
 
 BR, Julian




Re: Automatically updated Table of Contents with Nroff

2009-07-16 Thread Stefan Santesson
Julian,

For me this is not about nroff versus xml and I'm really not trying to
convince anyone to move away from xml.

I meant to discuss how to do TOC and other formatting for those who like to
edit in nroff.

/Stefan 


On 09-07-16 1:17 PM, Julian Reschke julian.resc...@gmx.de wrote:

 Stefan Santesson wrote:
 I guess you are right about that.
 
 And I'm mostly the same way. Still I find things when I see the finished
 product that I want to tweak before submission.
 
 Even if I like the result, I mostly end up doing at least 10 iterations of
 Save - compile - open in viewer - check result, before I'm done.
 
 Page breaks is the least of the issues. What is more important is that
 sections describing protocol syntax is not all messed up with wrong indent,
 line breaks etc.
 
 I guess not everyone appreciate WYSIWYG, but I do :)
 ...
 
 Did you try previewing the HTML version, by using rfc2629.xslt and just
 pressing F5 in your favorite browser?
 
 BR, Julian




Re: Last Call: draft-ietf-pkix-ta-format (Trust Anchor Format) to Proposed Standard

2009-07-15 Thread Stefan Santesson
Carl,

I agree with most of your assessment.

But yes, there are interoperability issues on the larger scale.
One current example is that the European and American bridging-CA
models are incompatible from a technical perspective, as the American
model builds on cross-certification while the European model builds on
the availability of TSLs.

A potential real impact scenario is that products may have to incorporate
both architectures in order to function in both worlds. That is something I,
as a standards guy, would be pleased to avoid.

However, I don't think there is much we can or should do to change
this.


On 09-07-15 2:46 AM, Carl Wallace cwall...@cygnacom.com wrote:

 
 TAF works with existing systems that use certificates as trust anchors
 (a certificate is a TrustAnchorChoice object), offers a minor change to
 that practice to allow relying parties to associate constraints with
 certificates using syntax that is widely available (TBSCertificate) and
 offers a minimal representation of trust anchor information for folks
 who require such (TrustAnchorInfo).  I don't see an interoperability
 issue with TAF.  Applications will use the appropriate format that meets
 its needs.  Certificates are not suitable as trust anchors in all cases.
 TAF is a relatively minimal, natural solution to this problem.
 
 
 -Original Message-
 From: Stefan Santesson [mailto:ste...@aaa-sec.com]
 Sent: Tuesday, July 14, 2009 6:42 PM
 To: Carl Wallace; Pope, Nick; ietf@ietf.org; ietf-p...@imc.org
 Subject: Re: Last Call: draft-ietf-pkix-ta-format (Trust Anchor
 Format)
 to Proposed Standard
 
 Carl,
 
 I think the critique of the TSL work is well founded from the
 perspective of
 TAM, but there is nevertheless an important point here.
 
 While TSL might not be an ideal standard for automated trust anchor
 management, very much caused by its mixed scope of fields for both
 human and
 machine consumption, it has despite this become a central component
 for
 efforts in Europe, supported by the EU commission, to provide a common
 framework for trust in CAs in Europe.
 
 There is a substantial risk that we will see two very different
 approaches
 that at least overlap in scope, which may harm interoperability.
 
 /Stefan
 
 
 
 On 09-07-10 1:50 PM, Carl Wallace cwall...@cygnacom.com wrote:
 
 This document has been discussed previously relative to TAF.  A
 portion
 of that discussion is here:
 http://www.imc.org/ietf-pkix/mail-archive/msg05573.html.
 
 
 -Original Message-
 From: owner-ietf-p...@mail.imc.org [mailto:owner-ietf-
 p...@mail.imc.org] On Behalf Of Pope, Nick
 Sent: Friday, July 10, 2009 4:02 AM
 To: 'ietf@ietf.org'; ietf-p...@imc.org
 Subject: RE: Last Call: draft-ietf-pkix-ta-format (Trust Anchor
 Format)
 to Proposed Standard
 
 
 Perhaps the authors should be aware of the existing European
 Technical
 Specification for trust status lists (TS 102 231), which have some
 overlap
 in function with the Trust anchor list in this internet draft.
 
 This is being adopted by all EU member states as a means of
 publishing
 information on CA recognised as trustworthy under the national
 accreditation
 or supervisory schemes.
 
 To obtain a copy go to:
 
 http://pda.etsi.org/pda/queryform.asp
 
 and enter TS 102 231 in the search box.
 
 Nick Pope
 Thales e-Security Ltd
 
 
 
 -Original Message-
 From: owner-ietf-p...@mail.imc.org [mailto:owner-ietf-
 p...@mail.imc.org]
 On Behalf Of The IESG
 Sent: 10 July 2009 01:14
 To: IETF-Announce
 Cc: ietf-p...@imc.org
 Subject: Last Call: draft-ietf-pkix-ta-format (Trust Anchor
 Format)
 to
 Proposed Standard
 
 
 The IESG has received a request from the Public-Key Infrastructure
 (X.509) WG (pkix) to consider the following document:
 
 - 'Trust Anchor Format '
draft-ietf-pkix-ta-format-03.txt as a Proposed Standard
 
 The IESG plans to make a decision in the next few weeks, and
 solicits
 final comments on this action.  Please send substantive comments
 to
 the
 ietf@ietf.org mailing lists by 2009-07-23. Exceptionally,
 comments may be sent to i...@ietf.org instead. In either case,
 please
 retain the beginning of the Subject line to allow automated
 sorting.
 
 The file can be obtained via
 http://www.ietf.org/internet-drafts/draft-ietf-pkix-ta-format-
 03.txt
 
 
 IESG discussion can be tracked via
 
 
 
 
  https://datatracker.ietf.org/public/pidtracker.cgi?command=view_id&dTag=17759&rfc_flag=0
Automatically updated Table of Contents with Nroff

2009-07-14 Thread Stefan Santesson
As I know there are quite a few Nroff users still out there, this might be
welcome news.

While I quite like Nroff for its ease of use and readability, one of the
problems that has always annoyed me with Nroff is having to manually update
the Table of Contents.
This is something where xml2rfc has a great edge over Nroff, or at least
had.

So the good news is that version 1.0 of NroffEdit includes a feature for an
automatically updated Table of Contents.
All you need to do is to click where you want the TOC and paste it there.
Once you done that it will automatically update as you change name of
titles, change page locations or even add or remove titles.

More information about this is no my web:
http://aaa-sec.com/nroffedit/index.html

I want to thank for the feedback I have received so far.
If you like this or have any feedback of any kind, feel free to let me know.

/Stefan

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-ietf-pkix-ta-format (Trust Anchor Format) to Proposed Standard

2009-07-14 Thread Stefan Santesson
Carl,

I think the critique of the TSL work is well founded from the perspective of
TAM, but there is nevertheless an important point here.

While TSL might not be an ideal standard for automated trust anchor
management, largely because of its mixed scope of fields for both human and
machine consumption, it has nevertheless become a central component of
efforts in Europe, supported by the EU Commission, to provide a common
framework for trust in CAs in Europe.

There is a substantial risk that we will see two very different approaches
that at least overlap in scope, which may harm interoperability.

/Stefan



On 09-07-10 1:50 PM, Carl Wallace cwall...@cygnacom.com wrote:

 This document has been discussed previously relative to TAF.  A portion
 of that discussion is here:
 http://www.imc.org/ietf-pkix/mail-archive/msg05573.html.
 
 
 -Original Message-
 From: owner-ietf-p...@mail.imc.org [mailto:owner-ietf-
 p...@mail.imc.org] On Behalf Of Pope, Nick
 Sent: Friday, July 10, 2009 4:02 AM
 To: 'ietf@ietf.org'; ietf-p...@imc.org
 Subject: RE: Last Call: draft-ietf-pkix-ta-format (Trust Anchor
 Format)
 to Proposed Standard
 
 
 Perhaps the authors should be aware of the existing European Technical
 Specification for trust status lists (TS 102 231), which have some
 overlap
 in function with the Trust anchor list in this internet draft.
 
 This is being adopted by all EU member states as a means of publishing
 information on CA recognised as trustworthy under the national
 accreditation
 or supervisory schemes.
 
 To obtain a copy go to:
 
 http://pda.etsi.org/pda/queryform.asp
 
 and enter TS 102 231 in the search box.
 
 Nick Pope
 Thales e-Security Ltd
 
 
 
 -Original Message-
 From: owner-ietf-p...@mail.imc.org [mailto:owner-ietf-
 p...@mail.imc.org]
 On Behalf Of The IESG
 Sent: 10 July 2009 01:14
 To: IETF-Announce
 Cc: ietf-p...@imc.org
 Subject: Last Call: draft-ietf-pkix-ta-format (Trust Anchor Format)
 to
 Proposed Standard
 
 
 The IESG has received a request from the Public-Key Infrastructure
 (X.509) WG (pkix) to consider the following document:
 
 - 'Trust Anchor Format '
draft-ietf-pkix-ta-format-03.txt as a Proposed Standard
 
 The IESG plans to make a decision in the next few weeks, and
 solicits
 final comments on this action.  Please send substantive comments to
 the
 ietf@ietf.org mailing lists by 2009-07-23. Exceptionally,
 comments may be sent to i...@ietf.org instead. In either case,
 please
 retain the beginning of the Subject line to allow automated sorting.
 
 The file can be obtained via
 http://www.ietf.org/internet-drafts/draft-ietf-pkix-ta-format-03.txt
 
 
 IESG discussion can be tracked via
 
 
 https://datatracker.ietf.org/public/pidtracker.cgi?command=view_id&dTag=17759&rfc_flag=0
 Consider the environment before printing this mail.
 Thales e-Security Limited is incorporated in England and Wales with
 company
 registration number 2518805. Its registered office is located at 2
 Dashwood
 Lang Road, The Bourne Business Park, Addlestone, Nr. Weybridge, Surrey
 KT15
 2NX.
 The information contained in this e-mail is confidential. It may also
 be
 privileged. It is only intended for the stated addressee(s) and access
 to it
 by any other person is unauthorised. If you are not an addressee or
 the
 intended addressee, you must not disclose, copy, circulate or in any
 other
 way use or rely on the information contained in this e-mail. Such
 unauthorised use may be unlawful. If you have received this e-mail in
 error
 please delete it (and all copies) from your system, please also inform
 us
 immediately on +44 (0)1844 201800 or email postmas...@thales-
 esecurity.com.
 Commercial matters detailed or referred to in this e-mail are subject
 to a
 written contract signed for and on behalf of Thales e-Security
 Limited.
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




Re: Last Call: draft-ietf-pkix-ta-format (Trust Anchor Format) to Proposed Standard

2009-07-14 Thread Stefan Santesson
Paul,

I just provided information.

I don't think we can do anything. It is not reasonable for the IETF to accept
TSL as the basis for our work, and it is not possible to turn the EU around so
that it abandons TSL.

However, it has a value to be aware of the situation.

/Stefan



On 09-07-15 1:49 AM, Paul Hoffman paul.hoff...@vpnc.org wrote:

 
 At 12:42 AM +0200 7/15/09, Stefan Santesson wrote:
 There is a substantial risk that we will see two very different approaches
 that at least overlap in scope, which may harm interoperability.
 
 And?
 
 Are you proposing that the IETF abandon its efforts because of the EU's? Or
 that the EU abandon its work because of the IETF's? Or that the two dissimilar
 protocols somehow be merged (in a way that would cause less, not greater,
 confusion)? Or something else?
 
 This is IETF Last Call. Please say what you think should happen in the IETF
 context with respect to this document.
 
 --Paul Hoffman, Director
 --VPN Consortium
 




Re: Automatically updated Table of Contents with Nroff

2009-07-14 Thread Stefan Santesson
Sounds interesting.

Do you by any chance have a source where this trivial information is
available?

All I have managed to find are ways to generate a TOC at the end of
the document, which you then have to move manually. When doing that move, your
page numbering and formatting may change.

Anyway, the trick here is to generate the TOC within a WYSIWYG editor
for instant viewing without ever touching a command-line tool, and without
having to index all your titles with macros, simply by letting the tool
analyze a typical Nroff I-D.

/Stefan


On 09-07-15 1:27 AM, Donald Eastlake d3e...@gmail.com wrote:

 It's trivial to define nroff macros to create a Table of Contents.
 
 Donald
 =
  Donald E. Eastlake 3rd   +1-508-634-2066 (home)
  155 Beaver Street
  Milford, MA 01757 USA
  d3e...@gmail.com
 
 On Tue, Jul 14, 2009 at 4:56 PM, Stefan Santessonste...@aaa-sec.com wrote:
 As far as I know there are quite a few Nroff users still out there, so this
 might be welcome news.
 
 While I quite like Nroff for its ease of use and readability, one of the
 problems that has always annoyed me with Nroff is having to manually update the
 Table of Contents.
 This is something where xml2rfc has a great edge over Nroff, or at least
 had.
 
 So the good news is that version 1.0 of NroffEdit includes a feature for an
 automatically updated Table of Contents.
 All you need to do is to click where you want the TOC and paste it there.
 Once you have done that, it will automatically update as you change the names of
 titles, change page locations, or even add or remove titles.
 
 More information about this is now on my web site:
 http://aaa-sec.com/nroffedit/index.html
 
 I want to thank you for the feedback I have received so far.
 If you like this or have any feedback of any kind, feel free to let me know.
 
 /Stefan
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf
 
 




Re: [Tools-discuss] Java application for editing nroff formatted Internet Drafts

2009-07-10 Thread Stefan Santesson
Update FYI,

In light of the xml2rfc discussion, I have now updated the NroffEdit tool
(version 0.84) so that it correctly supports nroff content that has been
auto-generated by the xml2rfc tool.

/Stefan



On 09-07-05 10:08 PM, Stefan Santesson ste...@aaa-sec.com wrote:

 
 Sorry for the double posting.
 
 The link to the tool fell off. Here it is:
 http://aaa-sec.com/nroffedit/index.html
 
 /Stefan
 
 
 On 09-07-05 9:08 PM, Stefan Santesson ste...@aaa-sec.com wrote:
 
 FYI,
 
 I have just released version 0.8 of NroffEdit (WYSIWYG nroff editor linked
 at http://tools.ietf.org/tools/)
 
 This version is now fully compatible with the IETF nroff template file
 (http://aaa-sec.com/pub/NroffEdit/2-nroff_template.nroff) and correctly
 implements all features that are supported by the RFC Editor.
 
 /Stefan
 
 
 
 
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf
 
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




Re: [Tools-discuss] Java application for editing nroff formatted Internet Drafts

2009-07-05 Thread Stefan Santesson
FYI,

I have just released version 0.8 of NroffEdit (WYSIWYG nroff editor linked
at http://tools.ietf.org/tools/)

This version is now fully compatible with the IETF nroff template file
(http://aaa-sec.com/pub/NroffEdit/2-nroff_template.nroff) and correctly
implements all features that are supported by the RFC Editor.

/Stefan







Re: [Tools-discuss] Java application for editing nroff formatted Internet Drafts

2009-07-05 Thread Stefan Santesson

Sorry for the double posting.

The link to the tool fell off. Here it is:
http://aaa-sec.com/nroffedit/index.html

/Stefan


On 09-07-05 9:08 PM, Stefan Santesson ste...@aaa-sec.com wrote:

 FYI,
 
 I have just released version 0.8 of NroffEdit (WYSIWYG nroff editor linked
 at http://tools.ietf.org/tools/)
 
 This version is now fully compatible with the IETF nroff template file
 (http://aaa-sec.com/pub/NroffEdit/2-nroff_template.nroff) and correctly
 implements all features that are supported by the RFC Editor.
 
 /Stefan
 
 
 
 
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




Re: XML2RFC must die, was: Re: Two different threads - IETF Document Format

2009-07-05 Thread Stefan Santesson
I also would be against mandating xml2rfc.

I do agree that certain aspects of xml2rfc are convenient, but when it comes
to editing text, I really prefer .nroff


On 09-07-05 8:16 PM, ned+i...@mauve.mrochek.com
ned+i...@mauve.mrochek.com wrote:

 I particularly like the fact that xml2rfc lets me focus on the content of my
 drafts and spend very little time on presentation issues. I don't even want to
 think about how much time I've wasted piddling around with formatting nits
 when using nroff, and don't get me started on LaTeX.
 
 Oh, and as for installing the software, I'm not a TCL fan, but even so I've
 done that on everything from  Mac OS 9 to Solaris to OpenVMS. I sure can't say
 the same for groff.

I hope to have provided a simple way around these issues with nroff
with the tool I wrote, as it makes it easy to handle the formatting nits and
does not require groff or any other external tool to be installed.

/Stefan




Re: More liberal draft formatting standards required

2009-07-02 Thread Stefan Santesson
On 09-07-02 1:00 AM, Randy Presuhn randy_pres...@mindspring.com wrote:

 One of the advantages of nroff input is that it *is* human readable.  (To me
 it seems much easier to read than HTML, but that's not the issue here.)
 To generate formatted output (in a variety of possible formats) the freely-
 available groff program works well.

Yes, and that is what I used for many, many years, until I got tired of saving
to temporary files and using the command line in endless iterations until I
got all page breaks and other formatting exactly as I wanted them.

I always wanted a tool where I could have the .nroff version and the text
version in two parallel windows, to immediately see the effect of any
change in .nroff, hence I wrote my own. Mostly for the fun of it. But the
more I use that feature, the more I have learned to like it and depend on it.
/Stefan




Re: More liberal draft formatting standards required

2009-07-01 Thread Stefan Santesson
To respond to the original question.

For what it is worth, I have written a simple and free tool in java for
editing (and viewing) drafts using nroff, which I find a lot easier and
more convenient than XML.

It's available from: http://aaa-sec.com/nroffedit/index.html

Source is available upon request.

/Stefan


On 09-06-28 6:33 PM, Iljitsch van Beijnum iljit...@muada.com wrote:

 Hi,
 
 XML2RFC isn't working for me.
 
 For instance, We are now required to use boilerplate that the
 official version of XML2RFC doesn't recognize so it's necessary to
 use a beta version that is even more undocumented than the regular
 undocumentedness of the official version of XML2RFC. Of course such
 things tend to only surface the day of the cutoff.
 
 I used to write drafts by hand sometimes in the past, but this is also
 very hard, because today's tools just don't have any notion of hard
 line endings, let alone with spaces at the beginning of all lines and
 hard page breaks (at places that make no sense in an A4 world, too).
 
 This is getting worse because the checks done on IDs upon submission
 are getting stricter and stricter.
 
 See 
 http://www.educatedguesswork.org/movabletype/archives/2007/11/curse_you_xml2r.html
   for a long story that I mostly agree with.
 
 As such, I want to see the following:
 
 - the latest boilerplate is published in an easy to copypaste format
 - drafts may omit page breaks
 - drafts may omit indentation and hard line breaks
 - no requirements for reference formats
 
 Note that this is for drafts in general. If the RFC editor wishes to
 impose stricter formatting rules I can live with that.
 
 Please don't reply with helpful hints on how to work with XML2RFC.
 Even with a perfect XML2RFC I would still be forced to create XML,
 which is something I desperately long to avoid.
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




Re: More liberal draft formatting standards required

2009-07-01 Thread Stefan Santesson
How do you translate the .nroff formatted document to a readable text
document?

Can emacs do that for you?

/Stefan



On 09-07-01 9:10 PM, Donald Eastlake d3e...@gmail.com wrote:

 I just do my drafts in nroff and edit with emacs.
 
 Donald
 =
  Donald E. Eastlake 3rd
  d3e...@gmail.com
 
 On Wed, Jul 1, 2009 at 2:22 AM, Stefan Santessonste...@aaa-sec.com wrote:
 To respond to the original question.
 
 For what it is worth, I have written a simple and free tool in java for
 editing (and viewing) drafts using nroff, which I find a lot easier and
 convenient than XML.
 
 It's available from: http://aaa-sec.com/nroffedit/index.html
 
 Source is available upon request.
 
 /Stefan
 
 
 On 09-06-28 6:33 PM, Iljitsch van Beijnum iljit...@muada.com wrote:
 
 Hi,
 
 XML2RFC isn't working for me.
 
 For instance, We are now required to use boilerplate that the
 official version of XML2RFC doesn't recognize so it's necessary to
 use a beta version that is even more undocumented than the regular
 undocumentedness of the official version of XML2RFC. Of course such
 things tend to only surface the day of the cutoff.
 
 I used to write drafts by hand sometimes in the past, but this is also
 very hard, because today's tools just don't have any notion of hard
 line endings, let alone with spaces at the beginning of all lines and
 hard page breaks (at places that make no sense in an A4 world, too).
 
 This is getting worse because the checks done on IDs upon submission
 are getting stricter and stricter.
 
 See
 http://www.educatedguesswork.org/movabletype/archives/2007/11/curse_you_xml2r.html
   for a long story that I mostly agree with.
 
 As such, I want to see the following:
 
 - the latest boilerplate is published in an easy to copypaste format
 - drafts may omit page breaks
 - drafts may omit indentation and hard line breaks
 - no requirements for reference formats
 
 Note that this is for drafts in general. If the RFC editor wishes to
 impose stricter formatting rules I can live with that.
 
 Please don't reply with helpful hints on how to work with XML2RFC.
 Even with a perfect XML2RFC I would still be forced to create XML,
 which is something I desperately long to avoid.
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf
 
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




RE: Re: [TLS] Re: Last Call: 'TLS User Mapping Extension' to Proposed Standard

2006-03-20 Thread Stefan Santesson
I was made aware of these comments, which in some mysterious way didn't
make their way to my inbox. Sorry for the delay.

Comments in-line;

Stefan Santesson
Program Manager, Standards Liaison
Windows Security

Date: Tue, 28 Feb 2006 10:54:35 -0800
From: Wan-Teh Chang [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: ietf@ietf.org
Subject: Re: [TLS] Re: Last Call: 'TLS User Mapping
 Extension'  toProposedStandard

Russ Housley wrote:

We all know that there is not going to be a single name form that is 
useful in all situations.  We also know that you cannot put every 
useful name form into the certificate.  In fact, the appropriate value

can change within the normal lifetime of a certificate, so putting it 
in the certificate will result in high revocation rates.
This is the reason that I believe this TLS extension will be useful in

environments beyond the one that was considered by the Microsoft 
authors.  Your perspective may differ 

I agree.  A rationale like the above would be a good addition to the 
Internet-Draft to strengthen its motivation.


Stefan This could easily be accommodated in the rationale section.


Re: EKR's suggested alternatives

While adding a new HandshakeType will incur the costs of a Standards 
track RFC, we should not go out of our way to avoid it.  The proposed 
solution in the Internet-Draft looks clean and natural to me.

Instead of adding a new HandshakeType, can we extend the Certificate 
message to include the user mapping data list as ancillary data?


Stefan I really don't think we should mess with the certificate
message as it is widely deployed as it is and does not include
extensibility to accommodate name hints.

Re: draft 02

1. Sec. 5 Security Considerations says:

   - The client SHOULD further only send this information
 if the server belongs to a domain to which the client
 intends to authenticate using the UPN as identifier.

I have some questions about this paragraph.

How does the client determine which domain the server belongs to 
without asking the server?


Stefan This must be up to local policy. There are many ways the client
can have or obtain this knowledge. E.g. the client could know this from
the host address it is using for connecting to the server.

Should the server send its domain to the client in the ServerHello 
user-mapping extension?  (This would be analogous to the 
certificate_authorities field in the CertificateRequest message.)


Stefan I don't think there is a need for any new mechanisms here.

Is it possible or common for a client to belong to multiple domains and

have multiple UPNs?  If so, how does a client that belongs to multiple 
domains pick a UPN to send to the server?


Stefan Yes, that is possible and also a typical issue for local policy
on the application layer. The client application, in cooperation with
the user will know what account it is trying to access, this will
according to local policy translate to a UPN. The protocol we define
here only describes how to communicate that UPN, not how you determine
the UPN for a client. The important fact is that having many UPNs for a
single user is possible using the same certificate. This standard
proposal enables that.

2. It would be good to define at least one other UserMappingType as a 
proof that the framework can be applied to a non-Microsoft environment.


Stefan We can't define a new user mapping type just to have one more.
There has to be a use case with a need for one. The current hint can be
used with a wide set of account names in practically any environment
that uses the principles of [EMAIL PROTECTED]

But the extensibility is there in case a new need is there in the
future.
The security AD (Russ) has assisted in developing that part of the
document.

It also worth noting that the RedHat team has publicly said that they
are also considering using this standard with the same name form.

3. If the UserMappingDataList and Certificate messages may be sent in 
either order, it would be good to specify that UserMappingDataList be 
sent after Certificate.
This order would highlight UserMappingDataList's role of providing 
additional information to augment the certificate, and the implicit 
requirement that UserMappingDataList only be used in conjunction with a

client certificate.  (I assume that the server cannot send a 
ServerHello with user-mapping extension without following with a 
CertificateRequest message.)


Stefan No, you need the user mapping before you can validate the
certificate. The server uses that data to locate the information it
needs to successfully verify the certificate sent in the next step. The
current order is therefore more logical.

4. I'd be interested in a use case that elaborates how a server uses 
the UpnDomainHint and the client certificate to locate a user in a 
directory database and what the server can do when the user has been 
located.


Stefan In our current primary use case the UpnDomainHint carries the
name

RE: Re: [TLS] Re: Last Call: 'TLS User Mapping Extension' to Proposed Standard

2006-03-20 Thread Stefan Santesson
Russ,

Thanks for that clarification.
This is what I poorly was trying to communicate.

Stefan Santesson
Program Manager, Standards Liaison
Windows Security


 -Original Message-
 From: Russ Housley [mailto:[EMAIL PROTECTED]
 Sent: den 20 mars 2006 14:09
 To: Stefan Santesson; [EMAIL PROTECTED]; ietf@ietf.org
 Subject: RE: Re: [TLS] Re: Last Call: 'TLS User Mapping Extension' to
 Proposed Standard
 
 I need to add a point of information regarding assisted in the text
 below.  I insisted that the solution support multiple name forms and
 that the solution include a backward compatible mechanism as new name
 forms are registered.  I did offer some guidance during AD Review to
 ensure that these properties were included.
 
 Russ
 
 
 At 01:35 PM 3/20/2006, Stefan Santesson wrote:
 Stefan We can't define a new user mapping type just to have one
more.
 There has to be a use case with a need for one. The current hint can
be
 used with a wide set of account names in practically any environment
 that use the principles of [EMAIL PROTECTED]
 
 But the extensibility is there in case a new need is there in the
 future.
 The security AD (Russ) has assisted in developing that part of the
 document.




RE: draft-santesson-tls-ume Last Call comment

2006-03-20 Thread Stefan Santesson
I'm not disagreeing with anything in this discussion.

However I don't think we need to address this in the discussed document.
The username in the defined domain hint is an account name and not
necessarily a host name. Name restrictions in this case are thus
governed by user name restrictions for the accessed system.


Stefan Santesson
Program Manager, Standards Liaison
Windows Security


 -Original Message-
 From: Eric A. Hall [mailto:[EMAIL PROTECTED]
 Sent: den 7 mars 2006 21:06
 To: Mark Andrews
 Cc: Kurt D. Zeilenga; ietf@ietf.org
 Subject: Re: draft-santesson-tls-ume Last Call comment
 
 
 On 3/7/2006 8:16 PM, Mark Andrews wrote:
 
  * Hostnames that are 254 and 255 characters long cannot be
  expressed in the DNS.
 
 Actually hostnames are technically defined with a maximum of 63
characters
 in total [RFC1123], and there have been some implementations of
/etc/hosts
 that could not even do that (hence the rule).
 
 But even ignoring that rule (which you shouldn't, if the idea is to
have a
 meaningful data-type), there is also a maximum length limit inherent
in
 SMTP's commands which make the maximum practical mail-domain somewhat
 smaller than the DNS limit. For example, SMTP only requires maximum
 mailbox of 254 octets, but that includes localpart and @ separator.
The
 relationship between these different limits is undefined within SMTP
 specs, but it's there if you know about the inheritance.
 
 When it is all said and done, max practical application of mailbox
address
 is 63 chars for localpart, @ separator, 63 chars for domain-part.
 Anything beyond that runs afoul of one or more standards.
 
 /pedantry
 
 --
 Eric A. Hall
http://www.ehsco.com/
 Internet Core Protocols
http://www.oreilly.com/catalog/coreprot/
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www1.ietf.org/mailman/listinfo/ietf

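[An editorial aside on the limits being debated in this thread: the numbers can be sketched as a small check. This is an illustrative model only; the function names are invented, and the 64-octet local-part cap comes from RFC 2821/5321 rather than from these mails.]

```python
# Illustrative sketch of the length limits discussed above (not from the
# original mails). Per RFC 1035/1123, a DNS label is at most 63 octets and
# a full domain name at most 255 octets; RFC 2821/5321 caps an SMTP
# local-part at 64 octets and leaves roughly 254 octets overall for
# "local-part@domain". Function names are invented for illustration.

def label_ok(label: str) -> bool:
    """A single DNS label may not exceed 63 octets."""
    return 0 < len(label) <= 63

def domain_ok(domain: str) -> bool:
    """A full domain name may not exceed 255 octets; every label <= 63."""
    return len(domain) <= 255 and all(label_ok(l) for l in domain.split("."))

def mailbox_ok(mailbox: str) -> bool:
    """An SMTP mailbox (local-part@domain), capped at 254 octets overall."""
    if len(mailbox) > 254 or "@" not in mailbox:
        return False
    local, _, domain = mailbox.rpartition("@")
    return 0 < len(local) <= 64 and domain_ok(domain)

print(label_ok("a" * 63))              # True
print(label_ok("a" * 64))              # False
print(mailbox_ok("user@example.com"))  # True
```

[The apparent conflict in the thread largely dissolves once labels (63 octets each) and full names (255 octets total) are kept distinct.]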


RE: draft-santesson-tls-ume Last Call comment

2006-03-16 Thread Stefan Santesson
I agree,

We should provide better guidance on encoding of the UPN.

This should map with the format of UPN when provided in a certificate.
The reference to the preferred name syntax is thus inherited from RFC
3280. This is how RFC 3280 restricts labels in the dNSName subject alt
name.

I will come back with a proposal on new text later today.


Stefan Santesson
Program Manager, Standards Liaison
Windows Security


-Original Message-
From: Mark Andrews [mailto:[EMAIL PROTECTED] 
Sent: den 8 mars 2006 04:23
To: Eric A. Hall
Cc: Kurt D. Zeilenga; ietf@ietf.org
Subject: Re: draft-santesson-tls-ume Last Call comment 


 
 On 3/7/2006 8:16 PM, Mark Andrews wrote:
 
  * Hostnames that are 254 and 255 characters long cannot be
  expressed in the DNS.
 
 Actually hostnames are technically defined with a maximum of 63
characters
 in total [RFC1123], and there have been some implementations of
/etc/hosts
 that could not even do that (hence the rule).

RFC 1123

  Host software MUST handle host names of up to 63 characters and
  SHOULD handle host names of up to 255 characters.

63 is not a maximum.  It is a minimum that must be supported.
 
 But even ignoring that rule (which you shouldn't, if the idea is to
have a
 meaningful data-type), there is also a maximum length limit inherent
in
 SMTP's commands which make the maximum practical mail-domain somewhat
 smaller than the DNS limit. For example, SMTP only requires maximum
 mailbox of 254 octets, but that includes localpart and @ separator.
The
 relationship between these different limits is undefined within SMTP
 specs, but it's there if you know about the inheritance.
 
 When it is all said and done, max practical application of mailbox
address
 is 63 chars for localpart, @ separator, 63 chars for domain-part.
 Anything beyond that runs afoul of one or more standards.
 
 /pedantry
 
 -- 
 Eric A. Hall
http://www.ehsco.com/
 Internet Core Protocols
http://www.oreilly.com/catalog/coreprot/
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www1.ietf.org/mailman/listinfo/ietf
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: [EMAIL PROTECTED]



RE: [TLS] Re: Last Call: 'TLS User Mapping Extension' to Proposed Standard

2006-02-28 Thread Stefan Santesson
Just to clarify this point.

The text in the introduction where the authors retained change control
was a leftover from the original draft, which was intended to go
informational.

We simply forgot to remove it when we changed the scope to standards
track.
This is fixed in version 03 of the draft.


Stefan Santesson
Program Manager, Standards Liaison
Windows Security


-Original Message-
From: Russ Housley [mailto:[EMAIL PROTECTED] 
Sent: den 19 februari 2006 23:22
To: Bill Fenner; Steven M. Bellovin
Cc: iesg@ietf.org; [EMAIL PROTECTED]; ietf@ietf.org
Subject: [TLS] Re: Last Call: 'TLS User Mapping Extension' to Proposed
Standard

I misunderstood the original question.  I'll get it fixed or withdraw 
the Last Call.

Russ


At 12:38 AM 2/19/2006, Bill Fenner wrote:

 Can we have a Proposed Standard
 without the IETF having change control?

No.  RFC3978 says, in section 5.2 where it describes the derivative
works limitation that's present in draft-santesson-tls-ume, These
notices may not be used with any standards-track document.

   Bill


___
TLS mailing list
[EMAIL PROTECTED]
https://www1.ietf.org/mailman/listinfo/tls



RE: Last Call: 'TLS User Mapping Extension' to Proposed Standard

2006-02-28 Thread Stefan Santesson
Sorry, I managed to hit the send button too early.

The IPR statement is available at:
https://datatracker.ietf.org/public/ipr_detail_show.cgi?ipr_id=688


Stefan Santesson
Program Manager, Standards Liaison
Windows Security


-Original Message-
From: Stefan Santesson 
Sent: den 28 februari 2006 15:26
To: 'Bill Strahm'; Russ Housley
Cc: Bill Fenner; [EMAIL PROTECTED]; ietf@ietf.org; iesg@ietf.org; Steven M.
Bellovin
Subject: RE: Last Call: 'TLS User Mapping Extension' to Proposed
Standard

This empty appendix was removed in draft 02.

As Russ stated before, an IPR disclosure has been posted to the IETF IPR
page which can be found at:


Stefan Santesson
Program Manager, Standards Liaison
Windows Security


-Original Message-
From: Bill Strahm [mailto:[EMAIL PROTECTED] 
Sent: den 20 februari 2006 02:21
To: Russ Housley
Cc: Bill Fenner; [EMAIL PROTECTED]; ietf@ietf.org; iesg@ietf.org; Steven M.
Bellovin
Subject: Re: Last Call: 'TLS User Mapping Extension' to Proposed
Standard

I saw all of the huff, and while I agree with it, I am more concerned
about

Appendix A. IPR Disclosure

TBD

What does that mean, and more specifically is a document with a TBD 
section really ready for last call at all?

Bill
Russ Housley wrote:
 I misunderstood the original question.  I'll get it fixed or withdraw 
 the Last Call.
 
 Russ
 
 
 At 12:38 AM 2/19/2006, Bill Fenner wrote:
 
 Can we have a Proposed Standard
 without the IETF having change control?

 No.  RFC3978 says, in section 5.2 where it describes the derivative
 works limitation that's present in draft-santesson-tls-ume, These
 notices may not be used with any standards-track document.

   Bill
 
 
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www1.ietf.org/mailman/listinfo/ietf
 
 




RE: [TLS] Re: Last Call: 'TLS User Mapping Extension' toProposedStandard

2006-02-28 Thread Stefan Santesson
Adding to Ari's arguments.
There is one more argument why it would be less functional to send the
mapping data in the extension.

The current draft under last call also includes a negotiation mechanism
where the client and server can agree on what type of mapping data they
support.

If the mapping data is sent in the client hello, the client has no way of
knowing what data the server needs unless prior knowledge has been
established. It must then send all types of mapping data that it
believes the server might need. This is less desirable than sending just
the type of data the server explicitly has stated that it prefers out of
the types the client has stated that it supports.
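The negotiation Stefan describes can be sketched as follows. This is an illustrative model only, not the draft's wire format; the type codes `UPN_TYPE` and `DN_TYPE` are hypothetical names invented for the sketch:

```python
# Illustrative sketch of the user-mapping type negotiation described above.
# Type codes and names are hypothetical, not the draft's actual encoding.

UPN_TYPE = 0x01  # hypothetical: UserPrincipalName hint
DN_TYPE = 0x02   # hypothetical: directory-name hint

def negotiate_mapping_types(client_supported, server_preferred):
    """From the types the client advertised, the server selects the ones
    it prefers; the client then sends only that mapping data."""
    return [t for t in server_preferred if t in client_supported]

# Client advertises what it supports; server states what it prefers.
selected = negotiate_mapping_types(
    client_supported=[UPN_TYPE, DN_TYPE],
    server_preferred=[DN_TYPE],
)
# The client now sends only DN-type data instead of every type it knows.
assert selected == [DN_TYPE]
```

Without the negotiation step, the client's only safe option is to send every mapping type it supports, which is exactly the inefficiency the paragraph above describes.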

While it would be technically possible to implement the same solution
along with Eric's alternative suggestions, I don't think it has been
demonstrated that it would provide any significant advantages.


Stefan Santesson
Program Manager, Standards Liaison
Windows Security


-Original Message-
From: Ari Medvinsky [mailto:[EMAIL PROTECTED] 
Sent: den 21 februari 2006 02:32
To: Eric Rescorla; ietf@ietf.org
Cc: [EMAIL PROTECTED]; iesg@ietf.org
Subject: RE: [TLS] Re: Last Call: 'TLS User Mapping Extension'
to Proposed Standard

Eric,

Thank you for your feedback.  Below are the problems that I see with the
alternative proposals that you suggested when compared to the approach
taken in the current draft:

1)  Proposal 1:  Send the user mapping extension in the initial
Client Hello (as data along with the extension value).

a) Problem: Privacy/Lack of user consent in information
disclosure.  The UPN sent in the user extension uniquely identifies a
principal - something that users concerned with preserving their privacy
may object to (i.e., the large majority of SSL sites are server side
auth only; the user was anonymous and now the user is identified by the
UPN).

To mitigate this problem to some extent (as I think you pointed out),
the client can be configured to only send the user mapping information
to some subset of servers (e.g., based on domain names).  However, this
completely lacks end user consent.  The user's principal name will be
automatically sent to every SSL server in the above list (as configured
by admin) - even if the user does not wish to be identified to the
target server (in SSL sessions with server side auth only and mutual
auth).

In contrast, if the mapping extension is sent on the third leg before
the certificate msg (as it is in the current draft), the user has the
option to bail out of client auth and not disclose the UPN or any data
in the cert (e.g., by not typing the pin for the smartcard).

b) Problem: Performance.  In your email you stated that the
UserMapping extension is likely to be around ~ 100 bytes and that's
negligible and thus it is ok to send in the initial ClientHello msg -
even if the server does not need it as is the case for server side auth
only TLS connections.

I disagree.  As you know TLS is used in many different scenarios, for
example:  EAP/TLS  (RFC http://www.ietf.org/rfc/rfc2716.txt) is used for
Radius, wireless 802.1x, RAS (dial up), etc.  The MTU in EAP is 1600
bytes.  Thus even 100 bytes can cause additional fragmentation causing
additional latency for RAS (consider dial up across geographically
dispersed end points).  Furthermore, per the current draft the
userMapping structure can be extended to carry multiple user mapping
hints, thus causing further degradation in this scenario.

2)  Proposal 2:  
1. Do an ordinary TLS handshake without offering UME.
2. Initiate a TLS rehandshake and send the UMDL in an extension
in the ClientHello.

This is essentially saying that the current extensibility mechanism in
TLS is broken and that the userMapping extension, in contrast to all
other extensions, requires two handshakes instead of one.  The method
that you are proposing is inconsistent with the approach used for every
other TLS extension.  

From a performance standpoint, proposal 2 is still a bad choice.  You
stated that "The performance issue is quite modest with modern servers."
While servers are getting faster, user load tends to follow suit;
furthermore, this proposal costs additional latency.  In contrast, the
approach used in the current draft does not suffer from these problems. 

Regards,

-Ari

-Original Message-
From: Eric Rescorla [mailto:[EMAIL PROTECTED] 
Sent: Saturday, February 18, 2006 5:31 PM
To: ietf@ietf.org
Cc: [EMAIL PROTECTED]; iesg@ietf.org
Subject: [TLS] Re: Last Call: 'TLS User Mapping Extension' to
Proposed Standard

BACKGROUND
This document describes an extension to TLS which allows the client to
give the server a hint (the User Principal Name) about which directory
entry to look up in order to verify the client's identity.

This works as follows:
1. The client offers the extension (called UME) in the 
   ClientHello message. 
2. The server accepts the extension.
3. The client sends the new HandshakeType
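The flow, as far as the archived message preserves it, can be sketched as a simple message trace; the labels here are simplified stand-ins, not TLS wire encodings:

```python
# Illustrative trace of the UME handshake flow summarized above.
# Message names are simplified labels, not actual TLS encodings.

transcript = []

def send(party, message):
    transcript.append((party, message))

# 1. Client offers the UME extension in its ClientHello.
send("client", "ClientHello(extensions=[user_mapping])")
# 2. Server accepts the extension in its ServerHello.
send("server", "ServerHello(extensions=[user_mapping])")
# 3. Client sends the new handshake message carrying the mapping hint.
send("client", "NewHandshakeType(user_mapping_data)")

assert len(transcript) == 3
assert transcript[0][1].startswith("ClientHello")
```

Note that the hint travels on the client's second flight, after the server has accepted the extension, which is the ordering the privacy argument earlier in the thread depends on.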

RE: [TLS] Re: Last Call: 'TLS User Mapping Extension' to Proposed Standard

2006-02-28 Thread Stefan Santesson
Eric,

In a general sense, name hints are IDs and IDs are not secrets and no
security system should depend on them being secrets.

However, there might be privacy concerns on where and when you want to
send what ID info to whom. We may e.g. have an issue of aggregate
knowledge. It is always good design to minimize the information sent and
not broadcast it around. ID1 and ID2 might not be a privacy issue when
sent separately to different servers but may be a privacy issue if they
are always sent together.

In the primary use case there is no reason to encrypt the UPN name hint
but if such requirement would arise for another hint type, then the
current model allows a new hint type to be defined which could carry
encrypted information, e.g. encrypted to the public key of the server
certificate.


On the name space issue;
We are nowhere close to exhausting the name space (less than 5% used)
for handshake messages. If we think we will do that in any reasonably
foreseeable future, then we simply have to figure out a way to fix that
problem before it becomes a blocking factor for protocol design. 
There are ways to do that while maintaining backwards compatibility, well
before this becomes a problem.


Stefan Santesson
Program Manager, Standards Liaison
Windows Security


-Original Message-
From: Russ Housley [mailto:[EMAIL PROTECTED] 
Sent: den 28 februari 2006 17:19
To: EKR
Cc: Stefan Santesson; Ari Medvinsky; ietf@ietf.org; [EMAIL PROTECTED]
Subject: Re: [TLS] Re: Last Call: 'TLS User Mapping Extension'
to Proposed Standard

Eric:

  I can see many situations where the information in this is not
  sensitive.  In fact, in the primary use case, the user mapping
  information is not sensitive.  An enterprise PKI is used in this
  situation, and the TLS extension is used to map the subject name in
  the certificate to the host account name.

But then we're left with the performance rationale that the user has
some semi-infinite number of mappings that makes it impossible to send
all of them and too hard to figure out which one. In light of the fact
that in the original -01 proposal there wasn't even any negotiation
for which type of UME data should be sent, is there any evidence that
this is going to be an important/common case?

This requires a crystal ball.  Apparently yours is different from 
mine, as the negotiation that you reference above was added to 
resolve comments from my AD review.

We all know that there is not going to be a single name form that is 
useful in all situations.  We also know that you cannot put every 
useful name form into the certificate.  In fact, the appropriate 
value can change within the normal lifetime of a certificate, so 
putting it in the certificate will result in high revocation rates.

This is the reason that I believe this TLS extension will be useful 
in environments beyond the one that was considered by the Microsoft 
authors.  Your perspective may differ.

Russ


___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf