I see lots of great ideas about how to
reinforce/replace/renew/whatever the Internet key management
infrastructure.
My problem is that I really can't predict how these are likely to work
in practice unless we try some of these ideas out in real world
scenarios and with real world people. SK and CT look great, but so did
DigiCash back in the day. The problem with DigiCash in my view was
that it was designed for a perfect fit between the crypto and the
requirements. There was no 'slop' in the system that could be used to
make adjustments. So it was a one-shot deal: either it was right first
time or it would never work.
Having been on the Web twenty years I now see requirements as being
dynamic and changing over time, not fixed. So I like to have the
ability to adapt a scheme to address changing requirements.
Looking at all the proposals to date I see some common elements:
* A security policy is defined [approaches vary]
* Information is delivered to the client [approaches vary]
* Client does something [approaches vary]
OK so there is a very wide degree of variation there (!) but wouldn't
it be nice to be able to try out all the different approaches at once
with a single plug-in that could support them all? There would be
limits to the security such a plug-in could deliver to the end user,
but it would still be possible to gain practical experience of what
the consequences of that security would be. I will come back to that a
bit later on.
There are basically two approaches that can be taken to communicating
information to the client:
1) Query an online service
2) Transmit the data in-band in TLS or whatever protocol is to be secured.
There are several different ways that we could approach (1). But
Internet transactions already involve three parties (client, server,
DNS resolver) and it is pretty clear from the OCSP experience that
adding a fourth is not going to fly. So I see two possible options for
the first approach: either we try to inject the additional data into
the DNS service connection or we define a new protocol that can carry
Trust and Name data together.
My experience of DNS is that the first is a terrible idea. The DNS
protocol is already stretched and there is a huge amount of legacy.
The DNS protocol has to serve two separate purposes: first, it is the
protocol for communicating between the name server and the local
server; second, between the client and the local server. It is only
the first of these protocols that would require tweakage.
Another reason for not using DNS protocol is that there is
(potentially) a different trust model. Only some of the security
policy statements are coming from the DNS. In Perspectives and
Convergence we have data that is essentially coming from a new trusted
party as well.
Any new online service would have to support a UDP query mode with
some sort of lightweight security. It would have to support transport
of a range of data and there would have to be some mechanism for
backing off to legacy DNS when the new protocol was not available.
So the way I would see this working is that the client would say something like:
Query: Policy=XXXX ID=YYYY Name=www.example.com, Port=80, service=http
Where XXXX and YYYY are some bits of previously negotiated data
establishing the context to be used for queries.
The response might then be something like:
* connect to 10.2.3.2 port 80 using HTTP

* connect to 10.2.3.2 port 80 using HTTP
  Evidence: DNS record chain

* connect to 10.2.3.2 port 443 using HTTP over TLS v1.2 and require
  cert to be in chain Q

* connect to 10.2.3.2 port 443 using HTTP over TLS v1.2 and require
  cert to be in chain Q
  Proof: Signed DNSSEC data chain (using DANE)

* connect to 10.2.3.2 port 443 using HTTP over TLS v1.2 and require
  cert to be in chain Q
  Proof: Heuristic CT observation notarized [here]
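To make the exchange above concrete, here is a minimal sketch of how such
a query and its connection advice might be represented in code. All class
and field names here are hypothetical illustrations, not from any
specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyQuery:
    # policy / context_id stand in for the XXXX / YYYY bits of
    # previously negotiated data; names are illustrative only.
    policy: str
    context_id: str
    name: str
    port: int
    service: str

    def encode(self) -> str:
        # Render the query in the informal wire syntax used above.
        return (f"Query: Policy={self.policy} ID={self.context_id} "
                f"Name={self.name}, Port={self.port}, service={self.service}")

@dataclass
class ConnectionAdvice:
    host: str
    port: int
    protocol: str                      # e.g. "HTTP" or "HTTP over TLS v1.2"
    cert_chain: Optional[str] = None   # required cert chain, if any
    proof: Optional[str] = None        # e.g. "Signed DNSSEC data chain (DANE)"

q = PolicyQuery("XXXX", "YYYY", "www.example.com", 80, "http")
advice = [
    # Bare advice with no supporting evidence at all.
    ConnectionAdvice("10.2.3.2", 80, "HTTP"),
    # Advice carrying a proof the client may (or may not) know how to check.
    ConnectionAdvice("10.2.3.2", 443, "HTTP over TLS v1.2",
                     cert_chain="Q",
                     proof="Signed DNSSEC data chain (using DANE)"),
]
print(q.encode())
```

The point of keeping the proof an opaque, optional field is that a client
which cannot process a given proof type can still act on the bare advice.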
Obviously a proof is only going to be directly useful to a client if
the client knows how to process it. So people who are looking to move
the policy enforcement point into the client are going to have to do
some plug-in coding to get the security benefit. But they can still
test the practicality of their idea in a real-world situation, and
they can do so without risking a negative impact on the end users.
The plug-in would also be capable of pulling the same information up
through TLS or HTTP extensions (as appropriate), so that would suggest
a need for some common format for exchanging the security policy
claims.
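One way such a common format might look is a simple JSON encoding that
is identical whether it arrives via the new query protocol, a TLS
extension, or an HTTP header. This is a sketch only; every field name
below is a hypothetical assumption, not taken from any specification:

```python
import json

# A hypothetical security policy claim. The same blob could travel over
# any of the delivery channels, so the client needs only one parser.
claim = {
    "subject": "www.example.com",
    "port": 443,
    "service": "http",
    "requirements": {"tls": "1.2", "cert_chain": "Q"},
    "proof": {"type": "dnssec-dane", "data": "..."},
}

# Serialize for transport and parse back on receipt.
wire = json.dumps(claim, sort_keys=True)
parsed = json.loads(wire)
print(parsed["subject"])
```

Because the proof is tagged with a type, a plug-in that does not
understand "dnssec-dane" can ignore it rather than fail outright.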
Another mode that might be used is a two-phase query, asking for
details only if the user requests more information. Consider the
following scenario:
I get that email from the RPF, I try to access the old name, find it
is blocked as a 'scam', go to the second one, find it is OK. That
would probably be enough for most people. But being a suspicious old
boot, I want to see the basis for the decision, so I ask the service
to explain the block. It shows that everything about the site changed
and it is now being hosted in St Petersburg, while the second site
still accepts the old certificate, demonstrating continuity.
Most of the time that I go to a site and get blocked or see a warning,
I am already a bit suspicious.
--
Website: http://hallambaker.com/
_______________________________________________
therightkey mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/therightkey