I think there are a number of questions:
1) Is there a need for a general purpose catenate certificate notary protocol?
2) Is there benefit to using one within a 'right key' solution?
2a) Is there benefit to performing the verification at the client edge?
2b) Should the right key use the same notary infrastructure as other
applications?
2c) Should the 'right key' notary mechanism be a one-off or make use
of a general-purpose mechanism?
My view is:
1) Absolutely yes.
There is a very clear need for such a protocol to protect digital
evidence in legal cases and for photographs taken by journalists,
etc.
2) It probably can't hurt; we have three proposals to do just that.
2a) I am really rather dubious that verification at the edge would be
widely implemented. Or rather, I think that people would write the
code and deploy the code, but never turn on the hard-fail option. Call
me a cynic, but why should this time be different?
2b) I can't see any special requirement for the 'right key' case
protocol-wise. I can't really see 'right key' having the critical mass
necessary to build out its own separate infrastructure, and even if it
did try to, I can't see how other purposes could be kept out of it. So
my view is that we should think in terms of there being one notary
infrastructure, with any 'right key' use built on it.
2c) Given the answer to 2b, I think it is clear that we need one
protocol document - unless we discover otherwise while trying to build
it.
On Fri, Feb 10, 2012 at 5:31 AM, Stephen Farrell
<[email protected]> wrote:
>
> Hi Phill,
>
> Some subset of this does look like the kind of thing the
> IETF could do if there are people interested. And we could
> even do it well, if there are people who'll write code and
> try deploy stuff as the IETF trundles along.
>
> Be good to get a feel for the level of interest in that.
>
> Note, a sequence of +1's or -1's is not a helpful response
> at this stage. Rather, discussion on whether this is a topic
> we should/could/MUST-NOT take on or whatever is what'd help.
> (Or, I guess, silence, which would also be telling:-)
>
> Ta,
> S.
>
>
> On 02/09/2012 03:27 PM, Phillip Hallam-Baker wrote:
>>
>> One component that appears in three of the proposals input to this
>> discussion is an 'append only' notary. While the precise role and
>> implementation of the notary changes there are some common features:
>>
>> * Use of the Haber/Stornetta catenate certificate approach (aka hash
>> chains, Merkle trees, etc.)
>> * Some notion of 'crowdsourcing' or 'peering' of notaries
>>
>> Note that the original Haber/Stornetta patent has expired. There may
>> be patents still outstanding on specific optimizations, but I don't
>> see any of those as being essential.
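A minimal Python sketch of the catenate certificate / hash chain idea
above; the function names and the all-zero genesis value are my own
illustrative choices, not part of any of the proposals:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def extend_chain(prev_witness: bytes, doc_hash: bytes) -> bytes:
    # Catenate the previous witness value with the new document hash;
    # each witness value thereby commits to everything before it.
    return h(prev_witness + doc_hash)

# Build a toy chain over three documents.
witness = b"\x00" * 32  # illustrative genesis value
for doc in (b"doc-1", b"doc-2", b"doc-3"):
    witness = extend_chain(witness, h(doc))
```

Because each value depends on its predecessor, back-dating an entry
would require recomputing every later witness value, which fails as
soon as any later value has been published or cross-notarized.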
>>
>>
>> None of the proposals seem to me to be dependent on a particular
>> notary implementation, and some are pretty sketchy when it comes to
>> details. I think this is a feature that could and should be factored
>> out as a separate problem.
>>
>> The uses of a general purpose append-only notary are very significant:
>>
>> * Proof of the date that evidence was collected for legal purposes
>> * Proof that a specific set of contract terms existed on a specific date
>> * Proof that a Web site delivered specific content on a specific date
>>
>>
>> The reason I think these additional purposes matter is that I think
>> a notary service needs to be done right and will require
>> infrastructure. Specifically, government-supported infrastructure. I
>> am very happy with the idea of walking into NIST or the FBI or the UK
>> Home Office and making the case for why USGov or HMG should invest
>> $1 million or so in setting up a reference notary for their country.
>> I don't think the same case can be made for the PKI proposals being
>> made.
>>
>> The reason I want to have government notaries in the mix is that a
>> government service provides authority that is recognized and
>> understood by the courts. If Ms Defendant is disputing the digital
>> evidence being presented by the Metropolitan police claiming it was
>> modified after they caught her, the court is going to accept a digital
>> notary stamp that is ultimately validated by a notary service run by
>> HMG much more easily than one just run by Comodo Inc. In the first
>> case the evidence is going to be presumed valid without further
>> consideration, in the second it is highly likely that expert witnesses
>> etc. will be required.
>>
>>
>> Such a notary service would be the online equivalent of a time service
>> and would need to be structured in a similar fashion with 'tiers' of
>> service:
>>
>> Tier 1: Master reference notaries run by national laboratories
>> Tier 2: Service notaries run by commercial entities and universities
>> Tier 3: Enterprise notaries that serve a specific organization
>>
>> Notaries in the top tier would cross-notarize on a regular (e.g.
>> hourly) basis.
>> Notaries in tier 2 would sync with one (or more) tier 1 notaries on a
>> regular basis.
>> Notaries in tier 3 would sync with tier 2 notaries.
>>
>>
>> I would expect that at least one major university (e.g. CMU, MIT,
>> whatever) would be willing to support the open source community by
>> running an open notary service.
>>
>> At least one entity in the system should be introducing a stream of
>> random data into the notarization stream.
>>
>>
>> Such an infrastructure would provide a means of fixing a digital
>> notarization event between two fixed points in the timeline as
>> follows:
>>
>> First we have to recognize that there is a difference between
>> 'ordinary' notarization in which a user gets a single document stamped
>> and 'meta' notarization which is performed between peers.
>>
>>
>> Ordinary Notarization:
>>
>> Let the document to be notarized be D, the time be t, and the current
>> witness values from the chosen tier 1, 2, 3 notaries be V1t, V2t,
>> V3t; the next witness values are V1t', V2t', V3t', the ones after
>> that V1t'', V2t'', V3t'', etc.
>>
>> The user submits H(D) to the tier 3 notary, together with the
>> identifier(s) of the meta-timelines that fixation is requested
>> against. The tier 3 notary replies with a URL and a 'delivery date'
>> (which will be a function of the meta-timelines being fixed against).
>>
>> After the delivery date (typically an hour in the future) the user can
>> retrieve a proof chain that fixes H(D) with respect to V1t, V2t, V3t
>> and V3t', and either a proof or a reference to a proof fixing V3t'
>> with respect to V2t'' and V2t'' with respect to V1t'''.
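The proof-chain check described above could look roughly like this; a
sketch only, and the step encoding (side marker plus sibling value) is
my assumption rather than anything specified in the proposals:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_proof(doc_hash: bytes, steps, witness: bytes) -> bool:
    # Fold H(D) through the proof steps; each step supplies a sibling
    # value and says which side of the concatenation it goes on.
    acc = doc_hash
    for side, sibling in steps:
        acc = h(sibling + acc) if side == "left" else h(acc + sibling)
    return acc == witness
```

The same routine would verify each link of a chain that fixes H(D)
against V3t, then V3t' against V2t'', and so on up to the tier 1
witness value.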
>>
>> The protocol could be adapted to support multiple second and first
>> tier notaries, but this is complexity without any real benefit. It
>> does not actually provide any additional security for reasons that
>> will be explained later.
>>
>> The most efficient data structure to use at this level is probably a
>> Merkle tree. Delaying the delivery of the proof means that the notary
>> can even choose the optimal approach after the number of items to be
>> notarized is known.
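For illustration, a small Merkle tree over a batch of submitted hashes
might look like this; a sketch, and the duplicate-last-node rule for
odd levels is one common convention, not something the proposals pin
down:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Compute the Merkle root of a list of leaf hashes.
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def audit_path(leaves, index):
    # Sibling hashes needed to recompute the root from leaves[index].
    path, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        path.append(("left" if sib < index else "right", level[sib]))
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return path
```

The notary would publish only the root as its witness value; each user
gets back just the audit path for their own leaf, so proof size grows
logarithmically with the number of items notarized in the interval.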
>>
>>
>> Meta Notarization
>>
>> Meta notarization is the process of fixing tier 1 notaries against
>> each other. Any notary that engages in peer notarization is a tier
>> 1 notary by definition. (This may not be a necessary restriction; we
>> can come back to that.)
>>
>> Tier 1 notaries may participate in one or more meta-timelines. Each
>> meta-timeline produces a stream of public witness values that is
>> archived by every member of the timeline. Each meta-timeline
>> incorporates at least one purely random data source.
>>
>> For convenience, all the members of a timeline use the same inputs to
>> the hash function in the same order. The precise mechanism for doing
>> this does not matter so much as that they all arrive at the same
>> result. The simplest implementation would be for one party to act as
>> the meta-notary, but politics is likely to intervene and require that
>> this function be performed on a rotating basis.
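One way members could arrive at the same result independently, without
any one party acting as the meta-notary, is to fold the same inputs in
a canonical order. A sketch; using sorted order as the canonical order
and a single beacon input are my assumptions:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def next_witness(prev: bytes, contributions, beacon: bytes) -> bytes:
    # Every member folds the same inputs in the same canonical order
    # (here: sorted), so all members compute the identical new witness
    # value on their own.
    acc = prev
    for c in sorted(contributions):
        acc = h(acc + c)
    return h(acc + beacon)  # the purely random data source noted above
```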
>>
>>
>>
>> Example:
>>
>> Alice wishes to fix document X according to the 'Internet' meta-timeline
>>
>> 1) Alice submits H(X) to her notary
>> 2) Some time later, notary responds with the static proof chain
>>
>> When Alice needs to use the proof to convince Bob she presents the
>> static proof chain and either Alice or Bob pulls the current value of
>> the 'Internet' timeline witness values.
>>
>> The most efficient data structure for this layer is probably a skip
>> list, but at this layer efficiency probably does not actually matter.
>>
>>
>> The advantage of this structure is that the problem is divided into
>> two parts: a static component that is immediately fixed, and a
>> variable component that can be cached to a great degree. If we are
>> thinking of applying this approach to Internet certificates, then it
>> is really not unreasonable for every client that chooses to validate
>> this data locally to download a few KB of meta-timeline data every
>> single day.
>>
>>
>> Security analysis
>>
>> One of the interesting features of the system is that the notaries are
>> trustworthy but not trusted. The scope for defection by a notary is
>> very limited.
>>
>> It is desirable to authenticate communications with the notaries, but
>> only to prevent a third party from performing a denial-of-service
>> attack by introducing bogus data.
>>
>> Once a notary has delivered the proof and the client has verified that
>> it correctly ties to the meta-timeline, the notary becomes irrelevant.
>> There is nothing that the notary can do to defect. The notary cannot
>> even perform a denial of service attack. The ability of the tier 2, 3
>> notaries to defect is thus bounded in time to the interval between the
>> request being made and the response being verified.
>>
>> The arguments for meta-notaries are a little more complex, but again
>> the notary can only defect by refusing service or by corrupting the
>> witness stream.
>>
>> Any client relying on verifying notary data is going to have to be
>> capable of adapting to the fact that the chosen meta-timeline might
>> disappear at a future date, so the denial-of-service attack is not
>> particularly worrying.
>>
>>
>> Meta-timelines can change over time and the protocol needs to be able
>> to cope with this. Specifically, a meta-timeline can fork if some
>> parties decide to leave. But this just requires that the successor
>> timelines agree on clear identifiers so that they do not become
>> ambiguous.
>>
>> Timelines can also merge. All this requires is for the last witness
>> value of one meta-timeline to be input to another, which is also the
>> way a meta-timeline should be decommissioned in an orderly fashion.
>> In fact, it is desirable for there to be multiple meta-timelines and
>> for them to fix themselves against each other on a regular basis in
>> case of a sudden and unexpected failure or breach.
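The merge/decommission step described above is just one more chained
input. A sketch with illustrative names:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merge_timelines(surviving: bytes, retiring_final: bytes) -> bytes:
    # Feed the retiring meta-timeline's final witness value into the
    # surviving timeline as an ordinary input, so everything the old
    # timeline fixed stays fixed relative to the new one.
    return h(surviving + retiring_final)
```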
>>
>
--
Website: http://hallambaker.com/
_______________________________________________
therightkey mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/therightkey