Re: [Acme] Long-lived certificates, but frequently renewed certificates

2021-03-20 Thread Phillip Hallam-Baker
I have two separate answers to these issues.

Answer 1 is to start from a clean sheet of paper and design a PKI that
addresses the needs of IoT devices directly.

Answer 2 is to apply some of the techniques developed for Answer 1 to the
legacy infrastructure.


The biggest single simplification in the Mesh is that certificates do not
have a predefined expiry time and revocation is not considered to be part
of authentication. Revocation is an authorization issue as far as I am
concerned.

To apply the same sort of principle to the WebPKI, I would have a system in
which every device has two certs: a device cert, issued during manufacture,
that binds a device key with a 30+ year expiry to some manufacturer or
industry association root; and a daily cert, issued by a CA, that chains to
a root of trust in the browser.

The way I would issue that cert is for the CA to generate a new keypair
every day and send a certificate and private key SHARE to the device,
either directly or by posting an encrypted package to a Web site somewhere.

To make this scheme secure, we use one of the ECDH curves for the keys and
threshold cryptography.

Let the device key pair be {d, d.P} and the daily key shares from the CA be
{c_0, c_0.P}, {c_1, c_1.P}, ...

The CA knows d.P and c_n.P and can calculate x_n.P = d.P + c_n.P = (d + c_n).P

The device knows d and is given c_n and so can calculate x_n.
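
The relation x_n.P = d.P + c_n.P = (d + c_n).P can be checked in a toy model.
Below is a minimal sketch in Python, using exponentiation in a small
multiplicative group as a stand-in for scalar multiplication on an ECDH curve
(g**x mod p plays the role of x.P); the parameters p, q, g are toy values for
illustration, nowhere near cryptographic strength:

```python
import secrets

# Toy stand-in for the curve: the subgroup of prime order q = 11 inside the
# multiplicative group mod the safe prime p = 23, generated by g = 2.
# Writing X = g**x mod p here corresponds to X = x.P on the curve.
p, q, g = 23, 11, 2

d = secrets.randbelow(q)      # long-lived device secret
c = secrets.randbelow(q)      # daily share chosen by the CA

D = pow(g, d, p)              # device public key (d.P), known to the CA
C = pow(g, c, p)              # daily share public key (c_n.P)

# CA side: combines the two PUBLIC keys to get the daily public key,
# without ever learning d.  Corresponds to x_n.P = d.P + c_n.P.
X_ca = (D * C) % p

# Device side: combines the two SECRETS to get the daily private key.
# Corresponds to x_n = d + c_n.
x = (d + c) % q
X_dev = pow(g, x, p)          # (d + c_n).P

assert X_ca == X_dev          # both sides arrive at the same daily key
print("daily public key:", X_ca)
```

The point of the arithmetic is that the CA can certify the daily public key
knowing only d.P and c_n.P, while only the device, which holds d, can recover
the daily private key from the share c_n.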

This approach is very fluid and could be integrated into an ACME framework
with little fuss. Instead of validating each cert request, the CA makes a
periodic check. We would also have to tweak TRANS/CT so that instead of
entering every daily cert issued, the CA would only need to log a template
cert on d.P and refresh it when the cert comes up for revalidation. And we
would probably want an extension or two to let relying parties know what is
going on.

That is what I would be proposing at the moment if a CA was paying me to
maintain the WebPKI. Since they are not, I will leave the reification of
this concept in ASN.1 to others.


That approach has real advantages for the existing WebPKI. It could be
slotted in pretty cleanly and would solve a lot of problems. There is no
need for revocation in this scheme. Instead of OCSP stapling, the server
simply downloads the new key share and cert each day.

It is not a good fit for the IoT world because it relies on the systems
always being connected. That approach doesn't work even for me, living in
the Boston metro region sitting on a 1 Gb/s Verizon FiOS Internet pipe. I am
pretty certain it won't work at all for anyone who decides Starlink is the
opportunity for them to realize their lifelong dream of living off-grid. One
day the generator will go out and the Starlink with it. And the owner will
return from a visit to civilization to find that they can't get into the
house because all the certs have expired.

DNS is a really poor match for IoT as well. Very few people own a DNS name;
they are ludicrously expensive, and unaffordable for at least a quarter of
the world's population.

The answer for IoT is to build out the PKI and the name space at the same
time. That allows name assignments to be permanent which removes the need
to revalidate cert holders. I have a draft on this proposal which I am
circulating privately for now. A public version should be up in a few
weeks' time.
___
Acme mailing list
Acme@ietf.org
https://www.ietf.org/mailman/listinfo/acme


Re: [Acme] CAA DNS RR IODEF e-mail spam && client side CAA verification

2020-11-06 Thread Phillip Hallam-Baker
On Thu, Oct 22, 2020 at 1:05 PM Ryan Sleevi  wrote:

>
>
> On Thu, Oct 22, 2020 at 11:59 AM Anton Luka Šijanec 
> wrote:
>
>> Hello!
>>
>>
...


> If such a system is already in place, how can I make sure I am
>> correctly implementing it on my domain? Can someone direct me to the
>> specification?
>>
>> My second suggestion is that client browsers would verify certificates
>> by performing DNS CAA queries and validating if the certificate is okay
>> and optionally send violation reports. What are your thoughts?
>
>
> This is intentionally, and explicitly, not part of CAA. CAA is designed as
> an ACL on issuance; were clients to check, this would necessitate that
> domains continue to authorize future certificate issuance.
>
> For example, if I used CA A at Time 0, but later switched new issuance to
> CA B at Time 10, if clients checked at Time 11, I would be required to list
> both A and B as authorized; A, because it had authorized some of my
> existing certificates, and B, because it was authorized for new
> certificates.
>
> Short of replacing every unexpired, unrevoked certificate from A, I would
> not be able to safely and securely only authorize B going forward.
>
> Section 1 of RFC 8659 makes this unambiguous:
> > Relying Parties MUST NOT use CAA records as part of certificate
> validation.
>

All correct.

But when I originally proposed CAA, we did not have Certificate
Transparency. Nor was DNSSEC deployed. Also, the very first draft of CAA
was joint with Ben Laurie and we were proposing using CAA as a means of
publishing data of that sort.

If the application warranted it, you could establish a check of CAA by
pickling the DNSSEC validation chain in the CT logs (we did consider a cert
extension).
The problem with trying anything of that sort is that PKIX is now patches
on patches on patches and this is a lot of complexity for not very much
advantage.

The WebPKI was originally designed as an authentication and accountability
infrastructure to enable online commerce. It is used for much more but many
of the constraints imposed by that original application make it a really
poor choice for a lot of applications, even if Let's Encrypt is giving free
certs.

WebPKI certs are bound to DNS names. WebPKI certs have a fixed expiry.
Neither of those properties is appropriate for the purpose of establishing
a secure conversation with my WiFi router, my coffee pot, etc. etc.

Another limitation on the WebPKI is that it is built on 1980s public key
cryptography, while the state of the art in crypto is now threshold
cryptography.


Re: [Acme] Consensus -- CAA draft to WGLC?

2017-07-10 Thread Phillip Hallam-Baker
On Thu, Jul 6, 2017 at 6:16 AM, Martin Thomson 
wrote:

> On 6 July 2017 at 20:07, Hugo Landau  wrote:
> > Vendor-assigned identifiers could be supported as such:
> >   vnd:example.com/custom-method
>
> RFC 6648 explains why vendor-prefixes can be a bad idea.  I think that
> you should do as Yaron suggested and establish a registry.  Set the
> bar low (specification required would be my choice) and then CABF can
> make a new entry as they see fit.  In particular, do away with
> attaching some sort of semantic to prefixes.
>
> > 2. Adding a reference to CABF is weird
>

CAA already has a tag registry. The bar to entry is deliberately low. If
you really can't use the existing registry, use it as a template for a new
one.

But the thing with DNS records is that you have a left hand side (domain
name, record type) and a right hand side (record data) and the DNS protocol
only allows you to select on the left hand side.

If you do a CAA query, you are always going to get back the full set of
records. So you are always going to have to look at them all.

CAA tags are text labels with a deliberately generous number of
possibilities. We are not going to run out. Rather than create a
subordinate registry, I would prefix within the label.
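
As a sketch of how tag-prefixing plays out on the consumer side, here is a
minimal Python example. The `acme-` prefixed tags and their values are
hypothetical, invented for illustration; only `issue` and `iodef` are real
registered CAA tags:

```python
# A CAA query returns the full RRset, so each consumer filters by tag.
# Records are modeled as (flags, tag, value) tuples. The "acme-" tags
# below are HYPOTHETICAL, shown only to illustrate prefixing in the label.
caa_rrset = [
    (0, "issue", "ca.example.net"),
    (0, "iodef", "mailto:security@example.com"),
    (0, "acme-endpoint", "https://acme.ca.example.net/directory"),
    (0, "acme-account", "https://acme.ca.example.net/acct/42"),
]

def tags_with_prefix(rrset, prefix):
    """Select the records whose tag carries the given label prefix."""
    return [(tag, value) for _, tag, value in rrset if tag.startswith(prefix)]

# Every consumer sees all four records and discards what it does not need.
print(tags_with_prefix(caa_rrset, "acme-"))
```

The prefix acts as a namespace inside the label itself, so no subordinate
registry is needed; consumers that do not recognize a tag simply skip it.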


Re: [Acme] CAA Account Key Binding Draft Specification

2016-04-25 Thread Phillip Hallam-Baker
Looks good to me.

I think it would also be useful to provide guidance to ACME clients as
to which CA to contact, so as to support the use of an LRA (local
registration authority) and/or transition from one CA to another.

For example, let's say the initial CA was AliceCert.com and the site
decides to use BobCert.com instead. The site has to continue to post
AliceCert.com CAA records until all the issued certs have expired. But
the site wants all servers to make requests for new certs to
BobCert.com.

For any largish certificate deployment such as a house circa 2025 with
a hundred plus connected IoT devices, an LRA type approach is going to
be desirable. The consumer buys a doorbell, they want the doorbell to
use TLS and thus it needs a cert. They want all the certs in their
house to come from the same CA and be managed through one box.

For this particular type of application, we might well be looking at
an entirely new type of cert, something below DV in scope, possibly
not even a WebPKI cert at all. It might be a cert whose status is more
akin to a PGP subkey than what we have dealt with to date.

A CAA record is the natural place to put an entry to say 'go here
to request a certificate'.


This is similar to the 'sufficient' issue that was discussed but I
don't think the same objection applies. I do not like the 'sufficient'
concept because in my world the CA can always reject a request for any
reason or no reason at all.


On Fri, Apr 22, 2016 at 11:06 PM, Randy Bush  wrote:
>> https://datatracker.ietf.org/doc/draft-landau-acme-caa/
>
> thanks.  seems simple and makes sense.
>
> randy, for adoption
>


Re: [Acme] CAA Account Key Binding Draft Specification

2016-04-21 Thread Phillip Hallam-Baker
It is actually very important because those of us who spend our time
looking up patent prior art can't necessarily check the GitHub in 20
years' time.

In some of the cases I have been involved in, the plaintiff has quite
literally read posts on an IETF mailing list and turned them into a
patent application.



On Thu, Apr 21, 2016 at 3:45 PM, Hugo Landau  wrote:
> Here we go:
>
> https://datatracker.ietf.org/doc/draft-landau-acme-caa/
>
> Hugo Landau
>
> On Wed, Apr 20, 2016 at 11:50:21AM -0700, Ted Hardie wrote:
>>Any reason not to publish this in the usual way? That will twig some folks
>>who look at the stream of published drafts.
>>
>>Ted
>>On Wed, Apr 20, 2016 at 11:46 AM, Salz, Rich <rs...@akamai.com> wrote:
>>
>>  > Alright, here's the first draft.
>>  >
>>  > https://hlandau.github.io/draft-landau-acme-caa/
>>
>>  Can others in the WG read this and make a suggestion as to yes/no
>>  adopt?  It's short (and the first para of 3.1 is a "did you read this"
>>  sentence fragment test? :)
>>
>>  Thanks!


Re: [Acme] Proposed changes to make use of JSON in layered fashion.

2016-04-21 Thread Phillip Hallam-Baker
On Wed, Apr 20, 2016 at 6:25 PM, Ron <r...@debian.org> wrote:
>
> Hi Phillip,
>
> On Tue, Apr 19, 2016 at 02:51:27PM -0400, Phillip Hallam-Baker wrote:
>> In the meeting, I proposed that we make the use of JSON in ACME
>> something that can be easily shared across multiple Web Services.
>>
>> In a few years time we are quite likely to have multiple JSON Web
>> services that people want to use together. Implementing them as a
>> suite is going to be easiest if there is a consistent style. Some of
>> the encoding choices in ACME right now make implementation in
>> statically typed languages rather less convenient than they could be.
>> In particular it is really difficult to parse data streams when the
>> type descriptor for an object is a field inside the object that can
>> occur last rather than being first on the wire.
>
> I was one of the remote people who thought several of the things you
> said at the meeting had merit and were worth discussing further here,
> so thanks for writing this up.
>
> But I also thought you were talking about something more than what you
> have proposed here now - so could you perhaps clarify a little - did
> some of us read more into what you were saying than you actually had
> in mind, or have you backed away from some of those suggestions after
> further discussion after the meeting?

Yes and no. When I originally got the speaking slot, 'submit to last
call' was not on the agenda. So the proposal I am making is slightly
more modest.

If we were 6 months out from last call then I would absolutely propose
splitting the draft into two documents, a platform document and an
ACME protocol document. In fact I already have such a draft:

https://tools.ietf.org/html/draft-hallambaker-json-web-service-02

But I don't want my proposal to be critical path for last call. And so
what I have done instead is to implement my version of ACME according
to the layered model I describe and work out the minimal changes
that would be necessary to make ACME consistent.

What I would like to avoid is what hit us with SAML, where the people
who insisted on one approach to layering in Version 1.0 discovered it
didn't really work and it forced us to a Version 2.0. Oops.


> In particular one of the things that I thought seemed important was
> moving _more_ of the protocol into the JSON.  Right now we have a
> bit of a mish-mash where part of the data you need to complete some
> task is in the JSON, and part of it is squirrelled away in the HTTP
> headers.

I am trying to work out where things are happening. Right now it looks
as if we have things happening in the HTTP headers that are not
actually covered by the authentication at all. Or that could just be
me misreading the draft; either way it is confusing.


> And there's a few places at least where that creates real
> problems (like where an issued certificate might have more than one
> valid issuer chain, just to pick one that JSHA and I had looked at).
>
> I got the impression you were hoping to make this more transport
> agnostic (which seems like a useful and fairly easy thing to do if
> we are going to change things to the degree you are talking here).

That is still my objective. Though I agree that the first cut might
actually be more of a 'discover bits of the spec that were assumed a
bit too much'.



> I understand the idea of what you are aiming for with this sort of
> change - but the idea of a protocol full of JSON objects that are
> mandated to strictly have only one field worries me a bit.
>
> It seems more like an admission that "JSON is wrong way to encode
> this" (or perhaps "I'm using the wrong tool to decode JSON") than
> a genuine advantage to the encoding.  And I'm not of the opinion
> that JSON is the wrong format to encode this in.

I don't see that as being the case. Rather we have a series of
protocol messages, each protocol message is of a specific type. We
should state the type in JSON. So my ideal would be:

"tag" : { ... object ... }

But it seems to be easier to interface these to JSON support code
wrapped up as an object.

{ "tag" : { ... object ... } }
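
A minimal sketch of how a receiver could exploit this framing, dispatching on
the single outer tag. The handler, its behavior, and the message contents are
invented for illustration, not taken from the draft:

```python
import json

# Singleton framing: each message is one JSON object with exactly one
# field, whose name is the message type and whose value is the body.
def handle_new_reg(body):
    # Illustrative handler; a real one would create a registration.
    return f"register contact {body['contact']}"

HANDLERS = {"new-reg": handle_new_reg}

def dispatch(raw):
    msg = json.loads(raw)
    if len(msg) != 1:
        raise ValueError("message must be a singleton object")
    (tag, body), = msg.items()       # unpack the single (type, body) pair
    return HANDLERS[tag](body)       # the type is known before the body is read

wire = '{ "new-reg": { "contact": ["tel:+12025551212"] } }'
print(dispatch(wire))
```

Because the type is the outer key, a statically typed or streaming decoder can
pick the target type before touching the body, rather than scanning the whole
object for a type field that may arrive last.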


Now in a future version of the spec, I could well imagine that we
might want to relax that one field requirement so that one
authentication could cover two or more messages or we might have
arrays of objects instead of singletons. So say we were requesting a
thousand device certs at a time, we could do that.

So I am saying that for purposes of ACME/1.0 we only need singleton
requests and responses.


> JSON objects can have multiple fields, and those field are unordered.
> I don't think you can really escape that, and I don't think we can
> reasonably "fix" that, without inventing something that "Isn't JSON".

Or you

[Acme] Proposed changes to make use of JSON in layered fashion.

2016-04-19 Thread Phillip Hallam-Baker
In the meeting, I proposed that we make the use of JSON in ACME
something that can be easily shared across multiple Web Services.

In a few years time we are quite likely to have multiple JSON Web
services that people want to use together. Implementing them as a
suite is going to be easiest if there is a consistent style. Some of
the encoding choices in ACME right now make implementation in
statically typed languages rather less convenient than they could be.
In particular it is really difficult to parse data streams when the
type descriptor for an object is a field inside the object that can
occur last rather than being first on the wire.


Looking at the impact of the changes in the spec, I don't think the
changes should delay things as I think they actually make the document
easier to read. At the moment there are things in sections 5 and 6
that seem in the wrong place. Promoting the "type" field to be the
name of the thing makes section 7 a lot clearer.

I can make the edits myself of course. But I thought I would check on
the nature of the edits first and see if my approach was acceptable.


I see the following areas needing change:


5.3.  Request URI Type Integrity

Reword to specify the mapping of requests onto JSON messages, each
request is a single object consisting of one field whose tag specifies
the request type and whose value is an object containing the request
data.

Move table from here into section 6. Because section 5 should specify
the layering, it should be ACME agnostic. Later on, this is the
section we will pull out and build on as a re-usable element in other
protocols (e.g. LURK).


5.X  Discuss GET vs POST

While we are about it, there should be some grounding in section 5 for
the GET/POST distinction made in section 6.

I think the document as a whole would be cleaner if these were set out
as two options in section 5 then section 6 describes which are
assigned to which.



6.  Certificate Management

One of the things I found very confusing when trying to implement was
that there are tables in 5.3 and 6.1 that seem to have the same
information but obviously don't as one has eight entries and the other
has six.

I think these need to be merged.


Example now becomes:

{ "new-reg": {
    "contact": [
      "mailto:cert-ad...@example.com",
      "tel:+12025551212"
    ],
    "agreement": "https://example.com/acme/terms",
    "authorizations": "https://example.com/acme/reg/1/authz",
    "certificates": "https://example.com/acme/reg/1/cert"
  }}


6.1.2 Authorization Objects

   The structure of an ACME authorization resource is a JSON
object with exactly one field that specifies resource type and whose
value is an object containing the resource parameters.

   {
     "status": "valid",
     "expires": "2015-03-01T14:09:00Z",

     "identifier": {
       "dns": {
         "value": "example.org"
       }
     },

     "challenges": [
       { "http-01": {
           "status": "valid",
           "validated": "2014-12-01T12:05:00Z",
           "keyAuthorization": "SXQe-2XODaDxNR...vb29HhjjLPSggwiE"
       }}
     ]
   }

7.  Identifier Validation Challenges

Here, just delete the 'type' fields and instead put the type value in
the section headings.



[Acme] ACME / LURK consistency

2016-03-19 Thread Phillip Hallam-Baker
As people here are probably aware, there is a BOF for LURK which is to
do with some form of remote keying (maybe).

Since the narrowest scope that might be decided for LURK is supporting
TLS, I think there is going to be a lot of co-development and
co-deployment. So it seems to me that LURK and ACME should look alike
as much as possible and allow as much code re-use between the two.

For me the consistency should ideally result from the same tool set to
build the code.

The proposal I have made for LURK is fairly
comprehensive. It has a full text specification, a reference section
describing the complete protocol and examples that were produced from
running code:

https://tools.ietf.org/html/draft-hallambaker-lurk-00

That draft was produced in three days, most of which was spent on the
design and writing the prose.

Developing specifications this way isn't just quicker, it is a lot
less frustrating and error prone. It also helps get to implementations
faster.

For obvious reasons, I prefer to write in a modern language (C#). But
the toolset also generates code for C. Adding a new target language
takes about a week and some knowledge of the code synthesis system,
which I am willing to share.

So this doesn't just make creating the specification easier, it makes
it much easier to get to production code that has to be written in
target languages that match what currently exists.


Which is why the issue of nested vs flat matters to me quite a lot.

Yes, I could rewrite the code synthesizer to support a data format
that allows the type of the data structure to appear as a field
anywhere inside it. But it would mean creating what is essentially a
whole new synthesizer just for ACME. And that would take about as long
as writing a backend for a whole new language. And that work would
then have to be repeated for every other target language.

The nested style imposes fewer development constraints and allows for
implementations that are more efficient in both time and memory
footprint.



Re: [Acme] Proposal for http challenge: Lookup SRV records before A/AAAA records

2016-02-10 Thread Phillip Hallam-Baker
Seems to me that we should specify a set of use cases, reduce them to 
requirements and then make sure we have at least one mechanism that covers each 
use case. 
It is not necessary for every use case to be covered by every mechanism. In 
fact that is the point of having multiple mechanisms.

On Wed, Feb 10, 2016 at 6:27 AM -0800, "Salz, Rich"  wrote:

> This doesn't seem like a great idea. ACME should largely behave the same way
that Web clients do. If you want to muck with DNS just use the DNS challenges.

As an individual, not co-chair, I strongly agree with this.


Re: [Acme] Comments on draft-ietf-acme-acme-01

2016-02-03 Thread Phillip Hallam-Baker
I think Terminology and Dependencies should also have a mention of
HTTP / HTTPS, and also RFC 3339 (the time format).

What I would like to get to eventually is some external document
describing a particular style of JOSE / JSON / HTTP / TLS Web service
that can simply be referenced by a Web Service spec. This means:

* We don't keep having to write out the same thing
* We don't have slightly different versions in different specs
* This can be handled by code synthesizers and support libraries.

I did a pass on this for the Mesh:

https://tools.ietf.org/html/draft-hallambaker-json-web-service-01

So far it looks pretty similar to ACME except that I am introducing a
new JOSE serialization format as I am looking at supporting future
apps that can't tolerate the Base64url encoding of the payload (which
will likely be in GB).


In section 5, actual examples of the messages would be nice.

5.1 There is a mention of cross origin but no explanation of what is involved.

This is worrying, what use is being made of HTTP state mechanisms in
ACME? Why is cross origin relevant? So far I have not read anything
that gives a rationale.

I would prefer to decouple ACME from HTTP as far as possible. HTTP is
far too complex to be analyzed at this point. Reinventing the wheel is
precisely what to do when someone is offering you jet-propelled roller
skates.

5.2

I start to get lost at this point. The "resource" values look to me
like protocol transaction requests.

I think it is important to be clear on this because at some point we
are going to want to add in request types that are expressly
command-like. Commands like "Hello".

The reason I would like to have consistency across JSON/HTTP type
services is that we are going to hit a requirement for 'service
management' type functions at some point. Things like negotiating
protocol version, features, declaring end of service, transfer to
another, scheduled outages, that sort of thing.

That is a module that I would hope we could define once and then
incorporate by reference in many Web Services.


The syntax of the request is currently:

{
  "resource": "new-reg",
  "contact": ["mailto:cert-ad...@example.com", ...]
}

Now that has all the information we want but there is no guarantee
"resource" is going to be emitted first in the serialization. That
could be a problem if you are dealing with abuse cases like very long
messages.

As a structural matter, I prefer to guarantee that the description of
the object precedes the object. So I prefer to describe 'new-reg'
object like so:

{
  "new-reg": {
    "contact": ["mailto:cert-ad...@example.com", ...]
  }
}

Again, this is something that doesn't make a huge deal of difference
for ACME message sizes but would be critical if you were sending a 1GB
email.

Where it does give a lot of leverage however is when you are using a
protocol synthesizer like ProtoGen. The parser knows what type of
object it is parsing when it starts on that object. That allows for a
more efficient process than having to first scan for the 'resource'
tag and then construct the object to emit.


5.6: We are going to do CFRG curves, right?



Re: [Acme] DNS challenge spec doesn't support CNAME model

2015-12-17 Thread Phillip Hallam-Baker
The point of adding more validation mechanisms should be to cover more use
cases, so examples that the HTTP challenge-response already covers are not
too helpful.

I do not see the value of a second type of challenge-response scheme. DNS is
a lousy match for challenge-response. Why are people so set on this?

If you want to use DNS, use a signing key to authorize the request. Put the
fingerprint of the key in the DNS.

On Thu, Dec 17, 2015 at 12:58 PM -0800, "Ted Hardie"  wrote:

On Thu, Dec 17, 2015 at 8:19 AM, Andrew Ayer  wrote:
On Thu, 17 Dec 2015 02:23:20 -0500
Eric Mill  wrote:

> Since DNS specifies that a CNAME can be thought of as an alias[6],
> this means that a service like Tumblr is capable of setting a TXT
> record for domains.tumblr.com with the validation token for
> blog.ericmill.com. A spec-compliant DNS resolver looking for a TXT
> record for blog.ericmill.com should follow the CNAME alias first, and
> then correctly identify the TXT record for domains.tumblr.com as
> applying to blog.ericmill.com.
>
> However, the current ACME spec asks for the record to be set for a
> prefix, not for the requested FQDN. And if I CNAME blog.ericmill.com
> over to Tumblr, Tumblr does not have the ability to set any records
> for a prefix, such as _acme-challenge.blog.ericmill.com. This means
> that services which have users CNAME domains are not able to use DNS
> validation to obtain certificates.
>
> I think that ACME should revisit the DNS specification and avoid
> using a prefix for the TXT validation, to enable this use case.

I disagree, because of a major restriction that DNS places on CNAMEs:
CNAMEs cannot coexist with other record types.

Do any of the relevant services support ALIAS or DNAME?

I'm also kind of wondering how many TXT records would build up at
something like domains.tumblr.com if it was the target for many different
blogs.

regards,

Ted

If ACME didn't use
prefixes, and you CNAME'd blog.ericmill.com over to Tumblr, you would
lose the ability to yourself complete a DNS challenge for
blog.ericmill.com, since no other record type could coexist with that
CNAME. This would pose a major problem for users of third-party
services which do support TLS with user-provided certs but don't
implement ACME.

Meanwhile, there is a simple solution that does enable your use case:
Tumblr can ask you to also CNAME _acme-challenge.blog.ericmill.com over
to them. It's slightly inconvenient to have to provision two CNAMEs
instead of one, but this seems preferable to forcing some users
to choose between CNAMEing to a third-party service and being able to
use ACME themselves.

> Also: I can't think of any changes offhand that would enable Let's
> Encrypt to support a use case where users set an A record to point to
> a third party service, such as for apex domains in the services
> mentioned above. But this is another important use case, especially
> for service providers which don't distinguish between apex and
> non-apex domains in their business offerings.[7] It'd be great to
> hear ideas for how that might be achieved.

Again, just CNAME _acme-challenge over to the third-party service :-)

Regards,
Andrew


Re: [Acme] Issuing certificates based on Simple HTTP challenges

2015-12-16 Thread Phillip Hallam-Baker
On Wed, Dec 16, 2015 at 3:27 PM, Stephen Farrell wrote:
>
>
> On 16/12/15 20:11, Michael Wyraz wrote:
>> Stephen,
>>
>> I fear I have no idea what you mean with a "suffix list" and such.
>
> (Caveat: I'm very much an amateur at DNS issues, I hope someone
> else provides a better/more accurate response if one's needed.)
>
> Pretty much all mechanisms of the kind you envisage end up
> requiring a way to allow the "real" authority for a set of
> names to control what happens deeper in the hierarchy. So
> tcd.ie could decide what cs.tcd.ie are allowed to do with
> acme for example. That means you end up needing to know
> roughly where the zone cuts are, which is a hard problem
> in general. The public suffix list is how that's mostly
> done in the web and dbound is (an IETF activity) trying to
> tease apart the various uses of that.
>
> So one of the problems with what you suggest is that the
> "right" place to look for my web servers is two up in the
> hierarchy and not the public suffix and not one up.

No, that isn't what we do for DV certs unless they are wildcard certs.

You are not going to be issuing wildcard certs with this mousetrap
built in this particular way for a long time.



Re: [Acme] Issuing certificates based on Simple HTTP challenges

2015-12-16 Thread Phillip Hallam-Baker
On Wed, Dec 16, 2015 at 8:38 AM, Stephen Farrell wrote:

>
>
> On 16/12/15 12:20, Julian Dropmann wrote:
> > If they trust you with that, they could just add an ACRE specific SRV
>
> (ACRE? You mean acme I guess.)
>
> > record, and thereby delegate that privilege to create certs to you and
> > everything would be fine.
>
> Yes. They could. But they won't;-)
>
> And as it happens in my own specific case I don't want the ability
> to get a cert for any name in the relevant zone, I only want to be
> able to get and renew certs for the names of my web servers. So the
> semantics of the thing you'd put in DNS (or the thing to which it'd
> point) would likely end up quite complex. It'd end up much like an
> RA is my guess, and I really don't think we want to go there yet
> with acme.


If the requirement is for an RA, you are not going to end up with a simpler
protocol if you insist on giving people umbrellas instead.

The reason the introduction of an RA simplifies the protocol is precisely
because it removes the need to validate the RA for each cert issue. The
RA validates once (a year), is issued a PKI credential and then uses that
to authenticate each request.


Stephen's situation is not typical of the enterprise but it is very typical
of academic and SOHO environments.

> They were able to do this with the A record too.
> >
> > I just do not find it intuitive in general that by defining any record
> you
> > do this implicitly as a domain owner.
> > At least I personally was not expecting this, but maybe its just me that
> is
> > so stupid.
> >
> > And the question was not about whether you personally are legitimated to
> > create certs for your university, but whether this should be the case for
> > any host of every single zone.
>
> Not "any host" but any host that runs a web server and where the
> entity requesting the cert demonstrates control over that web server.
> I think that is fine in general myself.


It all depends on what your requirements are.

My immediate requirement is to make the certificate warnings 'go away' on
my internal server boxen without me spending money. They are behind a NAT
and there is no way to change that.

The NAS boxes are running a version of Linux but the whole point of getting
a package deal was to avoid maintenance issues. A proposal that requires me
to write a half dozen scripts is not going to be any easier for me to
manage than my current solution of a local CA whose root I install on every
machine when I accept it into the network.


Re: [Acme] Issuing certificates based on Simple HTTP challenges

2015-12-15 Thread Phillip Hallam-Baker
On Tue, Dec 15, 2015 at 2:41 PM, Noah Kantrowitz <n...@coderanger.net>
wrote:

>
> > On Dec 15, 2015, at 9:48 AM, Phillip Hallam-Baker <ph...@hallambaker.com>
> wrote:
> >
> >
> >
> > On Tue, Dec 15, 2015 at 12:25 PM, Noah Kantrowitz <n...@coderanger.net>
> wrote:
> >
> > > On Dec 15, 2015, at 7:17 AM, Michael Wyraz <mich...@wyraz.de> wrote:
> > >
> > > Stephen,
> > >> Yes, I understand that and didn't actually refer to LE at all in my
> mail.
> > > I'm sorry if I misunderstood you with that.
> > >
> > >> Basically, IMO only after we first get a "now" that works
> > > We have a working HTTP-01 spec, implementation and CA. What's missing
> > > for "a 'now' that works"?
> > >
> > >> Personally the optional thing in which I'm much more interested is a
> > >> simple put-challenge-in-DNS one where the CA pays attention to DNSSEC,
> > >> since that's the use-case I have and that would provide some better
> > >> assurance to the certs acquired via acme. I can see that there might
> > >> also be value for some (other) folks in SRV if it means no need to
> > >> dynamically change DNS. But, if someone is saying "we must all do
> > >> these more complex things for security reasons" then they are, in this
> > >> context, wrong. And my mail was reacting to just such a statement.
> > > Why not just place a static public key in DNS that is allowed to sign
> > > ACME requests for this domain? Simple, no need for dynamic updates
> (yes,
> > > it's standardized for years but AFAIK not seen very often in real world
> > > scenarios).
> >
> > Anything that makes deployment _harder_ than the current LE client is a
> move in the wrong direction. UX matters, with security more than just about
> anything else. Unless you can propose a user flow to go with this change,
> no amount of hypothetical correctness is worth having a tool no one will
> use.
> >
> > Harder for whom?
> >
> > The current scheme isn't going to work for any geolocation based systems
> and is a terrible fit for enterprise.
>
> I think this is a bit of a red herring on a few fronts. You can use
> http-01 or similar strategies on a widely-replicated system, it is just
> annoying because you need to push the challenge response file to a bunch of
> places. If the geo-distributed piece is a CDN, the system is already
> designed to smash caches effectively so that is handled. Still, that is
> gross and a lot of work, but fortunately there is already a DNS challenge
> in the works that will help for some cases.
>

And is likely to be challenged by the IPR holder.

Putting keys in the DNS has prior art. It is also rather simpler to implement.


Re: [Acme] Issuing certificates based on Simple HTTP challenges

2015-12-15 Thread Phillip Hallam-Baker
Here is a handy list

https://cabforum.org/ipr-exclusion-notices/


On Tue, Dec 15, 2015 at 6:24 PM, Richard Barnes <r...@ipv.sx> wrote:

> On Tue, Dec 15, 2015 at 4:17 PM, Phillip Hallam-Baker
> <ph...@hallambaker.com> wrote:
> >
> >
> > On Tue, Dec 15, 2015 at 2:41 PM, Noah Kantrowitz <n...@coderanger.net>
> > wrote:
> >>
> >>
> >> > On Dec 15, 2015, at 9:48 AM, Phillip Hallam-Baker
> >> > <ph...@hallambaker.com> wrote:
> >> >
> >> >
> >> >
> >> > On Tue, Dec 15, 2015 at 12:25 PM, Noah Kantrowitz <
> n...@coderanger.net>
> >> > wrote:
> >> >
> >> > > On Dec 15, 2015, at 7:17 AM, Michael Wyraz <mich...@wyraz.de>
> wrote:
> >> > >
> >> > > Stephen,
> >> > >> Yes, I understand that and didn't actually refer to LE at all in my
> >> > >> mail.
> >> > > I'm sorry if I misunderstood you with that.
> >> > >
> >> > >> Basically, IMO only after we first get a "now" that works
> >> > > We have a working HTTP-01 spec, implementation and CA. What's
> missing
> >> > > for "a 'now' that works"?
> >> > >
> >> > >> Personally the optional thing in which I'm much more interested is
> a
> >> > >> simple put-challenge-in-DNS one where the CA pays attention to
> >> > >> DNSSEC,
> >> > >> since that's the use-case I have and that would provide some better
> >> > >> assurance to the certs acquired via acme. I can see that there
> might
> >> > >> also be value for some (other) folks in SRV if it means no need to
> >> > >> dynamically change DNS. But, if someone is saying "we must all do
> >> > >> these more complex things for security reasons" then they are, in
> >> > >> this
> >> > >> context, wrong. And my mail was reacting to just such a statement.
> >> > > Why not just placing a static public key to DNS that is allowed to
> >> > > sign
> >> > > ACME requests for this domain? Simple, no need for dynamic updates
> >> > > (yes,
> >> > > it's standardized for years but AFAIK not seen very often in real
> >> > > world
> >> > > scenarios).
> >> >
> >> > Anything that makes deployment _harder_ than the current LE client is
> a
> >> > move in the wrong direction. UX matters, with security more than just
> about
> >> > anything else. Unless you can propose a user flow to go with this
> change, no
> >> > amount of hypothetical correctness is worth having a tool no one will
> use.
> >> >
> >> > Harder for whom?
> >> >
> >> > The current scheme isn't going to work for any geolocation based
> systems
> >> > and is a terrible fit for enterprise.
> >>
> >> I think this is a bit of a red herring on a few fronts. You can use
> >> http-01 or similar strategies on a widely-replicated system, it is just
> >> annoying because you need to push the challenge response file to a
> bunch of
> >> places. If the geo-distributed piece is a CDN, the system is already
> >> designed to smash caches effectively so that is handled. Still, that is
> >> gross and a lot of work, but fortunately there is already a DNS
> challenge in
> >> the works that will help for some cases.
> >
> >
> > And is likely to be challenged by the IPR holder.
>
> You've mentioned IPR a couple of times.  If you have knowledge of IPR
> in this space, disclosures would be very helpful.  Same goes for
> anyone else here.
>
> Thanks,
> --Richard
>
>
> >
> > Keys in the DNS has prior art. It is also rather simpler to implement.
> >
> >
> > ___
> > Acme mailing list
> > Acme@ietf.org
> > https://www.ietf.org/mailman/listinfo/acme
> >
>


Re: [Acme] Issuing certificates based on Simple HTTP challenges

2015-12-15 Thread Phillip Hallam-Baker
On Tue, Dec 15, 2015 at 9:39 AM, Salz, Rich  wrote:

>
> > There's SRVName from https://tools.ietf.org/html/rfc4985 which in theory
> > already can be applied to https already.  SRVNames are used in the XMPP
> > world a lot, maybe other places as well.
>
> But you can't put a SRVName in a certificate SAN field, can you?


Actually you can. The SRV label is simply a DNS name. That is arguably the
only way that you can legitimately create service specific certs in the
WebPKI.

Port-specific certificates are an abomination that must not happen.
Well-known ports are not a viable discovery technique for modern services,
and the idea that they can provide domain separation is utter nonsense.
SRV-prefixed domain names do actually provide the necessary separation.
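As a rough illustration of why SRV-style prefixes separate services where ports cannot (the helper names and examples here are mine, not from RFC 4985 or any ACME draft):

```python
# Sketch: an SRV-prefixed owner name scopes a certificate or DNS policy
# to one service under a domain; a port number, being a transport-layer
# attribute, gives no such application-layer separation.

def srv_name(service: str, proto: str, domain: str) -> str:
    """Build an SRV-style prefixed name, e.g. _imap._tcp.example.com."""
    return f"_{service}._{proto}.{domain}"

def under_constraint(name: str, constraint: str) -> bool:
    """A name-constraint-style suffix check on DNS names."""
    return name == constraint or name.endswith("." + constraint)

imap = srv_name("imap", "tcp", "example.com")
xmpp = srv_name("xmpp-client", "tcp", "example.com")

# Distinct services get distinct names under the same parent domain,
# so they can be separated without port-specific certificates.
assert imap != xmpp
assert under_constraint(imap, "example.com")
assert under_constraint(xmpp, "example.com")
```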

The only objection people would make to SRV is that they would have to
rewrite their application to use SRV for discovery. But I don't see that as
a legitimate concern when the alternative would be having to re-engineer
PKIX and the WebPKI which simply isn't going to happen.

Port numbers are a transport layer attribute and the WebPKI is an
application layer concern.


Re: [Acme] Issuing certificates based on Simple HTTP challenges

2015-12-15 Thread Phillip Hallam-Baker
On Tue, Dec 15, 2015 at 10:08 AM, Kim Alvefur <z...@zash.se> wrote:

> On 2015-12-15 15:55, Phillip Hallam-Baker wrote:
> > On Tue, Dec 15, 2015 at 9:39 AM, Salz, Rich <rs...@akamai.com> wrote:
> >
> >>
> >>> There's SRVName from https://tools.ietf.org/html/rfc4985 which in
> theory
> >>> already can be applied to https already.  SRVNames are used in the XMPP
> >>> world a lot, maybe other places as well.
> >>
> >> But you can't put a SRVName in a certificate SAN field, can you?
> >
> >
> > Actually you can. The SRV label is simply a DNS name. That is arguably
> the
> > only way that you can legitimately create service specific certs in the
> > WebPKI.
>
> Almost, but SRVname has its own OID, so it's not the same as a DNSName.
>  But they live among the other SAN fields.
>

It's worth re-reading the RFC.

https://www.ietf.org/rfc/rfc4985.txt

The reason for introducing a separate OID seems to have been the desire to
drop the protocol component, which simplifies things a lot for name
constraints.


[Acme] Perspective on validation

2015-12-06 Thread Phillip Hallam-Baker
The discussion on validation on different ports suggests that we have the
wrong understanding of what validation is for.

All that is required to validate a certificate holder under the Baseline
Requirements is to prove they have control over a domain. This is also the
minimum required.

The port number is irrelevant, either you have control or you don't.


This is even more important when you try to extend ACME to email. Because
then you end up with a hierarchy.

The domain name holder for example.com controls al...@example.com,
b...@example.com, etc. and so they can get a cert for any of them. Alice
does not control example.com, but she does control al...@example.com.

So the domain name holder may be able to get an intermediate CA with
constraints to issue only client certs for *@example.com using DV
validation. Alice, an account holder, can only validate for al...@example.com
and can only get an EE cert.
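The hierarchy described above can be sketched as a simple scope check (illustrative only; the mailbox names are hypothetical stand-ins, not the elided addresses from this thread):

```python
# Sketch of the validation hierarchy: a domain holder is authorized for
# any mailbox in the domain; a mailbox holder only for that one address.

def authorized_for(controlled: str, requested: str) -> bool:
    """Does control of `controlled` authorize a cert for `requested`?"""
    if "@" in controlled:
        # An account holder controls exactly one mailbox.
        return controlled == requested
    # A domain holder controls every mailbox in the domain.
    return requested.endswith("@" + controlled)

assert authorized_for("example.com", "alice@example.com")
assert authorized_for("alice@example.com", "alice@example.com")
assert not authorized_for("alice@example.com", "bob@example.com")
assert not authorized_for("alice@example.com", "example.com")
```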


We seem to keep re-opening discussions on this topic as new people join in.

ACME validation is also necessarily constrained because it issues for public
CAs. The problem is very different if you are doing a private, internal CA.
You can get much stronger validation, much more easily, because you control
the horizontal and the vertical.

ACME is developing a certificate validation and provisioning protocol for
an infrastructure that was originally designed 25 years ago. The basic
principles of the WebPKI were established and fixed in deployed code in
1995. Trying to redefine how that system works twenty years later without a
major requirement driving the change is futile.

X.509 is not tied to a particular layer in the stack. But the WebPKI is
tied to the application layer. Strictly speaking it was conceived as being
the interface between layer 7 and 8. The interface between the Internet and
the 'real' world back in the days before the Internet was the real world.
Port numbers are a transport layer concept.


Re: [Acme] Server on >= 1024 port

2015-12-02 Thread Phillip Hallam-Baker
On Wed, Dec 2, 2015 at 12:52 PM, Romain Fliedel 
wrote:

> So we might have a record of the form:
>>
>> example.com  CAA  0 acmedv1 "port=666"
>>
>>
> If you have to modify the dns to use a custom port, why not use the dns
> validation method ? (once it's available)
>

Well there is a slight difference. DNS validation is possibly encumbered
for a start.

If by DNS validation you mean 'put the response to the challenge in the
DNS', then that requires a lot more administrative connection to the DNS
than 'put the fingerprint of the validation key in the DNS'.
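A minimal sketch of the difference: the fingerprint approach publishes one static record derived from the key, rather than a fresh challenge response per issuance. The record name and format below are hypothetical, not from any draft:

```python
import base64
import hashlib

# Hypothetical sketch: a static DNS record carrying a fingerprint of the
# validation key. One-time provisioning; no dynamic DNS update is needed
# for each challenge.

def key_fingerprint(public_key_bytes: bytes) -> str:
    """SHA-256 fingerprint of the encoded public key, base64-encoded."""
    return base64.b64encode(hashlib.sha256(public_key_bytes).digest()).decode()

# Stand-in bytes for an encoded public key.
pub = b"example-encoded-public-key"

# The '_acme-key' label and "sha256:" prefix are illustrative only.
record = f'_acme-key.example.com. IN TXT "sha256:{key_fingerprint(pub)}"'
print(record)
```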


Re: [Acme] Server on >= 1024 port

2015-12-02 Thread Phillip Hallam-Baker
On Wed, Dec 2, 2015 at 1:09 PM, Romain Fliedel <romain.flie...@gmail.com>
wrote:

>
>
> 2015-12-02 18:57 GMT+01:00 Phillip Hallam-Baker <ph...@hallambaker.com>:
>
>>
>>
>> On Wed, Dec 2, 2015 at 12:52 PM, Romain Fliedel <romain.flie...@gmail.com
>> > wrote:
>>
>>> So we might have a record of the form:
>>>>
>>>> example.com  CAA  0 acmedv1 "port=666"
>>>>
>>>>
>>> If you have to modify the dns to use a custom port, why not use the dns
>>> validation method ? (once it's available)
>>>
>>
>> Well there is a slight difference. DNS validation is possibly encumbered
>> for a start.
>>
>> If by DNS validation you mean 'put the response to the challenge in the
>> DNS' then that requires a lot more administrative connection to the DNS
>> than 'put the fingerprint of the validation key in the DNS'
>>
>
> There was a discussion about dns validation that was suggesting using the
> account public key hash as the DNS record value.
> Thus it would be relatively easy to provision the correct value.
>
>
Well there is prior art on putting keys in the DNS.

The idea of using the DNS as a channel for a challenge-response mechanism
would make me nervous.


Re: [Acme] Server on >= 1024 port

2015-12-02 Thread Phillip Hallam-Baker
On Wed, Dec 2, 2015 at 4:52 AM, Paul Millar  wrote:

> Hi all,
>
> I'm writing just to summarise this thread and check a consensus has been
> reached.
>
> On 25/11/15 11:13, Paul Millar wrote:
>
>> I was wondering whether people have considered services running on a
>> port other than port 443; in particular, ports greater than 1024.
>>
>
> The decision is not to support unprivileged ports (>= 1024) because of two
> factors:
>
>   1. ACME wishes to support deployments where untrusted
> users have (non-root) access to the same machine that
> provides a trusted service.
>
>   2. There is no supported mechanism for a CA to issue a
> certificate that is bound to a specific port.
>
> Removing either of these points would allow (in principle) ACME to support
> issuing certificates to services running on unprivileged ports.
>
> Is that a fair summary?


No.

The problem is that the validation process for the cert has nothing to do
with the port the cert is going to be used on. The purpose of the
validation process is to determine if the request is authorized by the
holder of the domain. It has nothing to do with what host or port the
certificate is going to be used for.

There is a useful rhyme:

Want a cert for HTTP? Validate the request on port 443
Want a cert for SMTP? Validate the request on port 443
Want a cert for NTP? Validate the request on port 443
Want a cert for Any other TP? Validate the request on port 443

The DNS only provides a binding between the domain name and the IP address,
and the IP address identifies a host that is typically shared between
multiple services; the OS can only be assumed to provide disambiguation
between domain names on port 443.

The only way to fix this would be to require navigation through an SRV
record and even that might not be enough.

Also note that this is not an area where IETF consensus is sufficient. The
IETF can publish an RFC describing a protocol that supports a particular
validation process. But that does not mean that the browser providers are
going to accept certs that are issued under that process.


Re: [Acme] Issue: Allow ports other than 443

2015-11-25 Thread Phillip Hallam-Baker
I am getting really nervous about allowing any port other than 443.

I just did a scan of a very recent clean install of Windows and there are a
*TON* of Web servers running for apps that didn't mention they had one.

The thing is that if I am running a process on any sort of shared host, I
can pretty easily spawn a server and start applying for certs for other
domains. Not only can I get .well-known, I can have any host name I like.


[Acme] Offsite?

2015-08-13 Thread Phillip Hallam-Baker
Seems to me that we could do with some intensive F2F time to get the spec
right. Like a couple of days working on the validation mechanisms, framing
etc.

I'm not happy that we missed the signature misuse issue.


Re: [Acme] Signature misuse vulnerability in draft-barnes-acme-04

2015-08-13 Thread Phillip Hallam-Baker
On Thu, Aug 13, 2015 at 3:17 PM, Tony Arcieri basc...@gmail.com wrote:

 On Thu, Aug 13, 2015 at 8:41 AM, Simon Josefsson si...@josefsson.org
 wrote:

 This is not a good discriminator of the CFRG options -- this problem is
 a weakness in this protocol, and should be addressed here.


 I'd agree, this is a conceptual misuse of digital signatures. While
 creating a signature algorithm resistant to this is a neat trick much
 like nonce reuse resistant AEAD schemes, you shouldn't design protocols
 that rely on that resistance in either case.


Old style crypto was to choose between a belt and braces.

New style is to take the belt and the braces and sew the pants to the
bottom of the shirt.

People need to change their attitudes. We are designing building blocks
that are going to be used by pin heads as well as geniuses. And on occasion
the genius is going to build something on a bad day. The harder it is to
screw up, the better.


Re: [Acme] Supporting off-line (manual) validation

2015-07-27 Thread Phillip Hallam-Baker
On Tue, Jul 28, 2015 at 3:45 AM, Richard Barnes r...@ipv.sx wrote:

 On Mon, Jul 27, 2015 at 7:51 PM, Phillip Hallam-Baker
 ph...@hallambaker.com wrote:
  As a general rule, any protocol that contains a component that may be
  subject to variation in the field needs an IANA registry. Since we are
 going
  to have multiple automatic validation processes we will be required to
 have
  a registry even if there is only one entry at first.

 ACME has always been structured with a registry in mind; the IANA
 considerations just haven't been written up :)


That's fine, I just wanted to make sure that this hadn't been erased during
the discussion.


Re: [Acme] WG meeting at IETF 93

2015-07-06 Thread Phillip Hallam-Baker
Another point I think should be considered on the agenda is how to use JOSE
in the spec.

I think it would be a very good idea to adopt the approach Mike Jones and
I have been suggesting of using JOSE without base64 armoring for
authenticating requests and responses at the Web Service level.

http://tools.ietf.org/html/draft-jones-jose-jws-signing-input-options-00


I really hope that ACME is not going to be the last JSON based security
spec IETF does and I would really like all the specs to end up with
something approaching a uniform style.



On Tue, Jun 30, 2015 at 4:12 PM, Ted Hardie ted.i...@gmail.com wrote:

 Just to bump this up on people's lists, Rich and I will put up a
 preliminary agenda next Monday.  If you want time for something other than
 draft-barnes-acme, please let us know.

 thanks,

 Ted and Rich

 On Fri, Jun 26, 2015 at 10:54 AM, Ted Hardie ted.i...@gmail.com wrote:

 Howdy,

 As you've seen from the IESG announcement, ACME has been approved as a
 working group, so our meeting in Prague will be as a working group rather
 than a BoF.  The IETF agenda is still tentative, but we're currently
 scheduled for Thursday, July 23rd, 15:20-17:20, in Karlin I/II.  (There is
 still a chance that will change, though, so please do not tailor travel to
 just that time frame!)

 Our charter lists draft-barnes-acme as a starting point, and Rich and I
 are asking the authors to produce an update for the meeting.  We expect
 some of the working group time in Prague to be a document review/discussion
 of that draft.

 If you have other agenda items you'd like to request time for, please
 send them to the list.

 thanks,

 Ted and Rich



 ___
 Acme mailing list
 Acme@ietf.org
 https://www.ietf.org/mailman/listinfo/acme




Re: [Acme] WG meeting at IETF 93

2015-07-01 Thread Phillip Hallam-Baker
I would like to present OmniPublish which is the protocol I was working on
before ACME came along.

It is not exactly the same as ACME but I think it is important to bear both
approaches in mind because we are going to end up requiring both and I
think they should both work in the same way and be implemented in the same
style.

Consider SMTP and NNTP, they do different things but they do them in the
same way. The protocols are very similar under the covers which made it
easy to write mail/news readers.


ACME is a replacement for the CA interface. The reason I did not propose
doing that was that IETF has tried that on 3 separate occasions without
success to date and W3C has tried it once.

OmniPublish is designed as a meta-protocol that provides client
applications with a one stop shop for all their network configuration and
credential needs.

When a Web Service starts up it needs to have a number of separate
configurations performed:

  * Get a WebPKI cert
  * Get DNS parameters entered
  * Open firewall ports
  * Register contact info in a directory (Jabber, etc)

Traditionally this information has to be hand-configured separately, and
this has major consequences for reliability. One of the main reasons I am
skeptical about DANE is that we have three levels of configuration that can
go wrong, and today we expect all three to be done by hand: the DNSSEC
config, the DANE entries for the server, and the server itself.

The only way I am going to trust that data is if those processes are
automated. Hence the idea behind OmniPublish.




On Tue, Jun 30, 2015 at 4:12 PM, Ted Hardie ted.i...@gmail.com wrote:

 Just to bump this up on people's lists, Rich and I will put up a
 preliminary agenda next Monday.  If you want time for something other than
 draft-barnes-acme, please let us know.

 thanks,

 Ted and Rich

 On Fri, Jun 26, 2015 at 10:54 AM, Ted Hardie ted.i...@gmail.com wrote:

 Howdy,

 As you've seen from the IESG announcement, ACME has been approved as a
 working group, so our meeting in Prague will be as a working group rather
 than a BoF.  The IETF agenda is still tentative, but we're currently
 scheduled for Thursday, July 23rd, 15:20-17:20, in Karlin I/II.  (There is
 still a chance that will change, though, so please do not tailor travel to
 just that time frame!)

 Our charter lists draft-barnes-acme as a starting point, and Rich and I
 are asking the authors to produce an update for the meeting.  We expect
 some of the working group time in Prague to be a document review/discussion
 of that draft.

 If you have other agenda items you'd like to request time for, please
 send them to the list.

 thanks,

 Ted and Rich



 ___
 Acme mailing list
 Acme@ietf.org
 https://www.ietf.org/mailman/listinfo/acme




Re: [Acme] various issues with the spec

2015-06-14 Thread Phillip Hallam-Baker
As a meta point, this discussion is incomprehensible. The objective is to
produce a comprehensible spec.

We should agree terms of art for the actors, certs etc. and use them
consistently.


On Sun, Jun 14, 2015 at 3:01 AM, Fraser Tweedale fr...@frase.id.au wrote:

 On Sun, Jun 14, 2015 at 08:06:46AM +0200, Stefan Bühler wrote:
  On Sun, 14 Jun 2015 11:17:38 +1000
  Fraser Tweedale fr...@frase.id.au wrote:
 
   On Sun, Jun 14, 2015 at 12:24:32AM +0200, Stefan Bühler wrote:
* dvsni: Please don't require the domain name which is being
validated to be part of subjectAltName; configuring such
certificate might break a working setup in production, when it
wins over an already present and valid certificate for the domain.
   
   No, this certificate is only presented for the host
   `nonce.acme.invalid'.
 
  You are thinking of a setup where you configure explicitly which
  certificate is used for which SNI value. But gnutls for example has a
  nice feature where you can just give it all your certificates, and it
  will pick the matching one automatically.
 
 Do you propose that the certificate *not* bear the domain name being
 validated in *either* the Subject DN or subjectAltName extension?

 This probably does not affect the protocol, but I think is nice to
 include it anyway for the sake of being explicit.  Can you identify
 any existing server software which would be incompatible with ACME
 dvsni due to the validation certificate bearing the name being
 validated?

* dvsni: `The public key is the public key for the key pair being
  authorized`. I hope this was just an accident, this would be
*really* wrong to require.
   
   Why would this be wrong?  Remember that this certificate is
   generated as part of, and intented for use only as part of the
   authorization workflow.  It has no bearing on certificates
   eventually issued for the domain name being authorized.
 
  I don't want my webserver to see my account private key, ever. Am I
  really the first guy to have a problem with that?
 
* dvsni: Don't require it to be a self-signed certificate - what
does it matter who signed it?
   
   It must be signed by the account key as evidence that the entity
   performing the authorization controls the account key.
 
  What exactly is the attack scenario here if this is not checked?
  Person A playing MITM to give control over domain B to account C, and
  account C started the authorization but didn't actually want it to
  succeed?
 
  If you really require such evidence, perhaps it could be required to
  have the certificate signed by the account key instead of being
  self-signed.
 
  regards,
  Stefan

 You've convinced me on these latter points - the certificate should
 be signed by the account key but the key could be a different key
 (i.e. not a self-signed cert) - and this means that the web server
 need not have access to the account private key.

 Cheers,
 Fraser

 ___
 Acme mailing list
 Acme@ietf.org
 https://www.ietf.org/mailman/listinfo/acme



Re: [Acme] Proposed ACME Charter Language

2015-05-14 Thread Phillip Hallam-Baker
On Wed, May 13, 2015 at 7:39 PM, Salz, Rich rs...@akamai.com wrote:

  https://github.com/letsencrypt/acme-spec/issues

 I'd prefer if we just recorded issues there, but discussed them in the
 mailing list.


I would prefer if we avoid getting into practices and policy issues there
as well.

An IETF working group has a finite lifetime and a limited constituency.
Both make it a bad place to decide security policy. We write 'Security
Considerations' not 'Security requirements'.

Validation processes are like algorithms. The IETF can recommend but can't
make a final decision. I think we all agree that it would have been a bad
thing if RFC 5280 had made SHA-1 support a MUST; that requirement has in
effect been superseded, and this is a good thing.

I don't think we are very likely to be changing crypto algorithms very
frequently in the future. We seem to have a grip on those. But validation
processes seem to me to be something that is not just likely to change, but
that we will want to keep a watchful eye on.

It isn't even the case that stronger validation mechanisms are necessarily
better or necessarily necessary. We are going to a world where security is
going to be required and insecurity becomes the exception. We are not going
to a world where perfect security is required though. If 'some' security is
required we can get rid of the low-assurance security signal (aka the
padlock icon) and replace it with a danger signal for no security.


[Acme] HTTP/J Draft

2015-02-25 Thread Phillip Hallam-Baker
Earlier on this list we discussed the question of how to use JSON to encode
messages. I think that it would be useful to write this up as a draft.

I don't particularly care what the rules are but I would like to avoid
writing a new set of rules for each protocol that comes along. One of the
main attractions for me in using JSON is that instead of offering five ways
of encoding a data structure like ASN.1 does or fifty like XML does, there
is one way to do most things.

What we don't have convergence on is how to sign JSON data in a protocol
message. Though this has been discussed on the JSON list a bit as well as
here.


Drawing together the earlier discussion points, here are things that I
think there was broad agreement on:

1) The signature covers exactly one data blob which is placed in the
payload portion of a HTTP message.

This means no signed headers, no signed request or response line. Any data
that is significant in the application protocol is carried in the
application content area.

2) The signature data is passed in the HTTP payload section, not as a
header.

This means I have to change my code. But I think it is the right thing to
do. The mixing of content metadata headers and protocol headers is a really
unlovely feature of HTTP. Putting the signature in the payload section
raises no implementation difficulties.

3) The payload section consists of a signature block followed by some
record separator mark followed by the signed data. The signed data is
passed verbatim without base 64 encoding.

4) The signature is described using JSON/JOSE


So a typical message would be something like

POST /.well-known/acme/Register HTTP/1.1
Host: example.com
Content-Length: 230

{ signatures : { signature :
 MRjdkly7_-oTPTS3AXP41iQIGKa80A0ZmTuV5MEaHoxnW2
 e5CZ5NlKtainoFmKZopdHM1O2U4mwzJdQx996ivp83xuglII7PNDi84w
 nB-BDkoBwA78185hX-Es4JIwmDLJK3lfWRa-XtL0RnltuYv746iYTh_q
 HRD68BNt1uSNCrUCTJDt5aAE6x8wW1Kt9eRo4QPocSadnHXFxnt8Is9U
 zpERV0ePPQdLuW3IS_de3xyIrDaLGdjluPxUAhb6L2aXic1U12podGU0
 KLUQSE_oI-ZnmKJ3F4uOZDnd6QZWJushZ41Axf_fcIe8u9ipH84ogore
 e7vjbU5y18kDquDg } }

{ Register : { ... Acme Stuff ... } }


I see no point in requiring base64 encoding or using any mechanism to allow
signing of HTTP headers. Put all the protocol related data in one place and
stick a signature on the front.

If folk get really upset then maybe we reverse the order and put the
signature blocks at the end so that a chunking/streaming approach can be
easily supported. But this adds quite a bit to implementation complexity
and ACME doesn't need it.
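The framing described above can be sketched like this. HMAC-SHA256 stands in for the real JOSE signature, and everything apart from the field names in the example message is my own assumption:

```python
import hashlib
import hmac
import json

# Sketch of the proposed framing: a JSON signature block, a separator,
# then the signed data carried verbatim (no base64 armoring).

def frame(payload: bytes, key: bytes) -> bytes:
    # Stand-in signature over exactly one data blob, the payload.
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    header = json.dumps({"signatures": {"signature": sig}}).encode()
    return header + b"\n\n" + payload  # blank line as the record separator

def unframe(message: bytes, key: bytes) -> bytes:
    header, _, payload = message.partition(b"\n\n")
    sig = json.loads(header)["signatures"]["signature"]
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch")
    return payload

# The signed data passes through untouched, byte for byte.
body = json.dumps({"Register": {"agreement": "..."}}).encode()
assert unframe(frame(body, b"shared-key"), b"shared-key") == body
```

Note that nothing outside the payload (no request line, no HTTP headers) is covered by the stand-in signature, matching point 1 above.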


Re: [Acme] Signed JSON document / Json Content Metaheader / JSON Container

2015-01-28 Thread Phillip Hallam-Baker
On Wed, Jan 28, 2015 at 7:54 PM, Nico Williams n...@cryptonector.com wrote:

> On Wed, Jan 28, 2015 at 07:50:29PM -0500, Phillip Hallam-Baker wrote:
>> On Wed, Jan 28, 2015 at 6:20 PM, Nico Williams n...@cryptonector.com wrote:
>>> On Wed, Jan 28, 2015 at 06:11:46PM -0500, Phillip Hallam-Baker wrote:
>>>>> OK, but why not put all of this into the headers anyways?
>>>> Well that is what I suggested in my Content-Signature work and that is
>>>> exactly how my code works today. But folk proposed introducing the
>>>> signature in the HTTP content segment and that forced me to think about
>>>> which approach is better.
>>> Your approach looks like a Transfer-Encoding to me.  If that's what it
>>> looks like, and that's what it walks like, [and that's what we want,]
>>> then that's what it should be.
>> Umm, I designed the Chunked transfer encoding. A TE gives the length of
>> blobs. This is not a TE.
> So it's a new MIME type of signed data?


A new MIME type of JSON-wrapped data, similar to the rfc822 type.

The content could be encrypted, for example, or just be the metadata.


Re: [Acme] Signed JSON document / Json Content Metaheader / JSON Container

2015-01-28 Thread Phillip Hallam-Baker
On Wed, Jan 28, 2015 at 6:20 PM, Nico Williams n...@cryptonector.com wrote:

> On Wed, Jan 28, 2015 at 06:11:46PM -0500, Phillip Hallam-Baker wrote:
>>> OK, but why not put all of this into the headers anyways?
>> Well that is what I suggested in my Content-Signature work and that is
>> exactly how my code works today. But folk proposed introducing the
>> signature in the HTTP content segment and that forced me to think about
>> which approach is better.
> Your approach looks like a Transfer-Encoding to me.  If that's what it
> looks like, and that's what it walks like, [and that's what we want,]
> then that's what it should be.


Umm, I designed the Chunked transfer encoding. A TE gives the length of
blobs. This is not a TE.




>> I think I can make a good architectural argument for the JSON Container
>> approach and besides which it is really useful at the file system level.
> You want the contents of the object to include the signature??


No, I want the option of doing drag and drop on the file plus metadata blob
in a fully interoperable fashion. So the contents are still the contents.
But they are prefixed by the metadata.
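A sketch of that container idea, under the assumption that the metadata is a single line of JSON prefixed to otherwise untouched contents (this layout is illustrative, not the actual JSON Container format):

```python
import json
import os
import tempfile

def write_container(path: str, metadata: dict, contents: bytes) -> None:
    # Metadata prefix on its own line, then the contents verbatim.
    with open(path, "wb") as f:
        f.write(json.dumps(metadata).encode() + b"\n")
        f.write(contents)

def read_container(path: str):
    # The contents are still the contents; the prefix is peeled off first.
    with open(path, "rb") as f:
        metadata = json.loads(f.readline())
        return metadata, f.read()

# Round trip: the file plus its metadata blob travel as one object.
path = os.path.join(tempfile.mkdtemp(), "example.jcont")
write_container(path, {"Content-Type": "text/plain"}, b"hello world")
meta, body = read_container(path)
```

The point is that no filesystem cooperation is needed: drag and drop moves one ordinary file, and the metadata rides along inside it.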




> I mean, I've wanted filesystems to expose some internal metadata (e.g.,
> a Merkle hash tree root hash for a file's content, with those parameters
> needed to reconstruct the same hash from just the contents, namely, the
> shape of the tree and the size of each non-tail block of contents).  But
> making this part of the contents instead of the metadata strikes me as
> rather problematic.


It isn't making it part of the contents. Metadata is still metadata.

Trying to get the file system to support additional semantics is a losing
proposition. And even if they are supported they won't be standard across
all the platforms.




>> Separating out protocol data from content-metadata is long overdue.
> Maybe, but I do want POSIX to continue not commingling contents and
> metadata.


I want POSIX to curl up in the corner and die.

Co-mingling contents and metadata is an inevitable consequence of
considering files to be just a stream of bits: inside the container is the
only place the metadata can go.

UNIX has had horrid kludges for putting metadata into the content since the
start. That's what magic numbers are.
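Magic numbers are exactly that kind of in-band metadata: a file's type is recovered by sniffing the first few bytes of its contents. The signatures below are well-known constants; the sniffer itself is just a sketch:

```python
# Well-known magic numbers: type metadata smuggled into the content itself.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "image/png",        # PNG eight-byte header
    b"%PDF": "application/pdf",
    b"\x7fELF": "application/x-executable",   # ELF binaries
}

def sniff(content: bytes) -> str:
    """Guess a MIME type by inspecting leading bytes of the content."""
    for magic, mime in MAGIC.items():
        if content.startswith(magic):
            return mime
    return "application/octet-stream"

print(sniff(b"%PDF-1.7 ..."))  # prints application/pdf
```

An explicit metadata prefix does the same job without requiring every consumer to maintain a table of byte signatures.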

>> Sort of. That might be where we eventually do the work. But the
>> requirements would be ACME in the first instance and the constraints from
>> JSON. I would see JOSE as being something we just plop in and should work
>> (or else they have some splainin' to do).
> I meant: why not use whatever JOSE delivered?


Base64 encoding the content so as to be able to work out the boundaries.
Blech.
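The objection is partly overhead: base64url inflates the signed content by a third just so it can be embedded as a JSON string, whereas verbatim framing carries it at no cost. A quick illustration:

```python
import base64

# 36-byte JSON fragment repeated 100 times: 3600 bytes of content.
content = b'{"Register": {"example": "payload"}}' * 100
encoded = base64.urlsafe_b64encode(content)

# base64 maps every 3 input bytes to 4 output characters: 4/3 expansion.
print(len(content), len(encoded))  # prints 3600 4800
```

On top of the size cost, the encoded form must be decoded before the receiver can even look at the signed request.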