Hi Stephen,
Comments inline.
On 3/27/15 10:04 AM, Stephen Farrell wrote:
If/when this list produces charter text that the IESG considers is
ready for review, then that'd be the time at which using the main
IETF list was most useful. (Before that happens, I suspect there'll
be plenty of mail on this list finally followed by a mail asking
for comment on the proposed wg charter that will be sent to the
ietf announce list and copied here.)
Ack.
[...]
I don't agree. The letsencrypt.org proposal is only one (proposed)
instance of a CA service, so I think maybe you ended up with the
wrong impression there. The folks working on that are willing to
do the work to produce a protocol that will be able to use any CA
service that wants to talk acme. That all seems like entirely normal
process to me. If we were standardising for a particular service,
I agree that would be problematic, but we are not. If it helps,
I'd block a charter that was only for one particular service.
I had a different impression. In the interest of moving the
discussion forward, I am not going to repeat my point. I acknowledge
that it will be taken into consideration. Thanks.
At least one operational CA service (PHB/Comodo) has expressed
interest already and will, I'm sure, be active on this list;
more such would of course be welcome.
That is true; however, judging also from the notes, what is proposed and
what the vendor expressed interest in are quite different sets of problems.
AFAIK, the proponents do not want to address those in this proposal (maybe
in future work, but not in scope now).
Besides the many issues with automated certificate issuance (even for
just a DV cert), the choices made by current Internet CAs (I am
referring to Internet CAs because, for corporate or "closed" PKIs,
automation has NEVER been a problem using current standards) are based
on POLICY decisions, not technical merits. Is the IETF going to be in
the policy-decision business instead of focusing on the technical
aspects of interoperability?
No. But the fact that only about 30% of the alexa top 1M web sites
even do TLS after nearly 20 years shows that we have failed to
provide certificate management technology that sites have found
usable. The acme protocol is another attempt to address that
I respectfully disagree on this point. First of all, I think 30% is not
a bad percentage, considering that the computing power required for
crypto was not available until recently (definitely not at reasonable
cost 20, or even 10, years ago, given the growth of the Internet).
More importantly, I think the numbers should be interpreted in the
right context. The right question to ask is: what percentage of the
websites that actually require authentication and confidentiality is
not using TLS? I would say the percentages would be quite different
in that case.
The second important point is to analyze why people are not using
certificates; the answer may have more to do with having to pay for
certificates than with "technical" barriers. This is definitely true
for the top 1M websites, and even more so for personal websites,
small blogs, etc.
For these reasons, I do not think we are using the right numbers here
to justify the "we have failed" argument.
IMHO, a lot of progress has been made recently (thanks to "prime time"
vulnerabilities and threat awareness due to famous leaks) in the
adoption of certificates and secure ciphers where needed.
Other solutions, e.g. self-signed certificates plus certificate
pinning, could have been adopted for low-assurance certificates to get
100% of websites onto TLS (even if opportunistic and not tied to
validation), but current application layers (arguably rightfully so)
will not validate those TLS connections.
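For concreteness, this is the sort of low-assurance setup I have in mind:
a self-signed certificate plus an SPKI pin, sketched with stock openssl
commands (the hostname and file names are hypothetical):

```shell
# Hypothetical example: "blog.example" and the file names are illustrative.
# Generate a throwaway self-signed certificate for a small site:
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=blog.example"

# Derive an SPKI pin (base64 of the SHA-256 of the public key), the
# value pinning mechanisms such as HPKP (RFC 7469) compare against:
openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64
```

A pinning client would then compare the presented key against this stored
pin instead of performing chain validation, which is exactly the step
current application layers refuse to take for self-signed certificates.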
IMHO, ACME will definitely not get us to 100%; not even close. My
opinion is that the people who use TLS today will have another set of
tools to manage their certificates (only for low levels of assurance,
though), and the people who do not use TLS today will continue not to
use it. Automated certificate tools (both standards-based and not)
exist and are deployed today.
I guess this is something we can check in 2-3 years if this work is
pushed forward.
important audience via automation and some more "modern" tech. That
said, we've also learned a lot more about what helps things work
in this space and there is now a much increased interest in getting
https to be used, hopefully we learn from that and succeed this
time.
I am sorry, but I completely disagree. There is nothing more "modern"
about what is proposed that gives it a technical advantage over what we
have today (as an extreme example: would you rewrite TCP to use JSON
because it is more "modern"? That is hyperbole, of course, but it may
help get the main point across). If I am wrong, please point out the
technical advantages of the chosen solution; otherwise we must concur
that this is a yet-another-format proposal.
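To make the "format" point concrete, here is a purely hypothetical sketch
of the kind of JSON-encoded issuance request such a protocol would carry;
every field name below is invented for illustration and is not actual acme
syntax. Semantically it asks for nothing that existing enrollment
protocols (CMP, CMC, EST, etc.) cannot already express in their own
encodings:

```json
{
  "type": "certificate-request",
  "identifier": { "type": "dns", "value": "blog.example" },
  "csr": "<base64url of a DER-encoded PKCS#10 request>",
  "signature": "<base64url signature over the request body>"
}
```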
I am not saying that this should stop the work; it is, after all, a
decision for the community. However, it would be "curious", after
acknowledging this point, to see the proposal move forward (promoting
competing standards that differ basically only in message format is
quite odd for a standardization body, don't you think?).
Again, we should make it clear whether we are ready to retire the
existing standards and, if so, be prepared to justify to the existing
deployed base why they have to switch to a more limited protocol.
The point that automation intersects with policy is arguable I
agree, but again I don't see any procedural issue here. Those kinds
of trade-offs are quite normal as we make changes to, or develop
new, protocols.
I think this statement is a bit dangerous and conflicts with the very
nature of the IETF.
I understand the need for trade-offs, but these should be based on
technical considerations. IMHO, other organizations (e.g., the
CA/Browser Forum) might be more appropriate for discussing
non-technical trade-offs; having their feedback before moving forward
with this work could be a good idea.
*Real Scope of ACME.* I think there should be a discussion about where
this work is supposed to land. If it is another attempt (as noted during
the BoF) to push DANE further (even though, as also pointed out during
the BoF, there is not much real-world interest in it), possibly to
replace work like the Web PKI or the PKIX protocols, this should be
clearly stated. If that is the case, I think we are choosing a
single-point-of-failure model for trust (DNSSEC), which is scary and
dangerous, especially from a privacy perspective, considering who
controls the top-level-domain keys. Privacy advocates should really be
concerned about this issue.
I think you ended up with the wrong impression here too. What I
heard at the BoF was that the acme proponents were by far most
interested in today's web sites, and today's web PKI, but that
they were not unfriendly to DANE. But even had they been focused
on trying to push the web PKI towards DANE (which is just not the
impression I got), that would be a technical and not a procedural
issue.
It is entirely possible that I misinterpreted some of the comments;
it would be useful to have comments from the acme proponents themselves.
Thanks for your comments.
Looking forward to hearing about the technical concerns.
Cheers,
Max
_______________________________________________
Acme mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/acme