Watson Ladd <[email protected]> wrote
Sat, 27 Sep 2014 10:42:30 -0700:

| The first choice is what to gossip: tree heads and proofs or certs.

What's the reason for including proofs in the gossiping? Is it an
optimisation of network resources? My initial thought is that
gossiping about anything that isn't signed by a log opens the door to
attacks on log reputation.
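
Roughly what I mean, as a Python sketch (the function names are mine
and the details are illustrative; RFC 6962, section 3.5 defines the
signed data, and I'm assuming an ECDSA/P-256 log key here):

    import struct
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def sth_signed_data(timestamp_ms, tree_size, root_hash):
        # TreeHeadSignature input per RFC 6962, section 3.5:
        # version (1 byte) || signature_type=tree_hash (1 byte) ||
        # timestamp (8 bytes) || tree_size (8 bytes) ||
        # sha256_root_hash (32 bytes)
        return struct.pack(">BBQQ", 0, 1, timestamp_ms,
                           tree_size) + root_hash

    def worth_gossiping(log_key, timestamp_ms, tree_size,
                        root_hash, signature):
        """Relay only STHs the log actually signed, so gossip
        can't be abused to spread fabricated heads that hurt a
        log's reputation."""
        try:
            log_key.verify(
                signature,
                sth_signed_data(timestamp_ms, tree_size, root_hash),
                ec.ECDSA(hashes.SHA256()))
            return True
        except InvalidSignature:
            return False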


| Both have privacy implications: while certs more directly reveal which
| sites are visited, it's possible that tree heads will be unique per
| cert due to the impact of adding to a tree one cert at a time, so each
| precert will end up showing a different head. This is a tricky issue.

I think that this would have to be handled by stipulating that logs
cannot issue "crazy numbers" of STHs. The numbers get crazy and
dangerous when they approach the number of unique STH requests,
counted per client. (Imagine a log producing a new tree head for each
STH request and "watermarking" clients by recording which unique STH
was delivered to each of them, so that when someone later comes asking
for an inclusion proof tied to that STH, the log can link a site to a
client.)

If there's going to be any point in a maximum number of STHs issued
per something, it has to be auditable. In the CT case, I can think of
limits on STHs per time unit and STHs per logged certificate.

A new log parameter, a _minimum_ merge delay, could be defined for the
per-time-unit solution. The delay would have to be chosen based on the
expected number of STH requests per time unit.

For the "per new entry" idea, we would also have to add a new maximum
new entries per time unit. Not sure that this is such a great thing for
CT. Might be useful for other types of logs.
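
To make the auditability concrete, here's a sketch of how one could
check observed STHs against a hypothetical per-time-unit limit; both
window_ms and max_sths_per_window are made-up log parameters, and the
check can of course only cover STHs the auditor has actually seen
(pooling observations is what gossip is for):

    from bisect import bisect_left

    def audit_sth_rate(sth_timestamps_ms, window_ms,
                       max_sths_per_window):
        """Check that no window of length window_ms contains more
        distinct observed STHs than the advertised limit."""
        ts = sorted(set(sth_timestamps_ms))
        for i, t in enumerate(ts):
            # Index of the oldest STH inside (t - window_ms, t].
            j = bisect_left(ts, t - window_ms + 1)
            if i - j + 1 > max_sths_per_window:
                return False
        return True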


| The second choice is what gossip produces. In particular do we push
| consistency proofs towards clients, or pull the heads towards
| monitors? Either approach can work, but the first approach could save
| bandwidth: a consistent head does not need to be reported. The first
| also enables remembering what has been recorded. The second approach
| is simpler for gossipers.

I think consistent heads need to be reported too, since that's the
only way to learn that you are the only one seeing a particular log
which, from your point of view, is behaving well.

I'm reading "consistent head" to mean an STH that checks out correctly
against an older trusted STH and a consistency proof from that point.
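
For reference, here's the standard Merkle consistency check I have in
mind, as a Python sketch (hashing as in RFC 6962; the code is my own
illustration, not taken from any CT implementation):

    import hashlib

    def node_hash(left, right):
        # Interior node hash per RFC 6962:
        # HASH(0x01 || left || right).
        return hashlib.sha256(b"\x01" + left + right).digest()

    def verify_consistency(first, second, first_hash, second_hash,
                           proof):
        """Check that the tree of size `second` with root
        `second_hash` is an append-only extension of the tree of
        size `first` with root `first_hash`."""
        if first == second:
            return first_hash == second_hash and not proof
        if first == 0 or first > second or not proof:
            return False
        path = list(proof)
        # If `first` is a power of two, the old root is itself a
        # node of the new tree and serves as the first path element.
        if first & (first - 1) == 0:
            path.insert(0, first_hash)
        fn, sn = first - 1, second - 1
        while fn & 1:
            fn >>= 1
            sn >>= 1
        fr = sr = path[0]
        for c in path[1:]:
            if sn == 0:
                return False
            if fn & 1 or fn == sn:
                fr = node_hash(c, fr)
                sr = node_hash(c, sr)
                while fn and not fn & 1:
                    fn >>= 1
                    sn >>= 1
            else:
                sr = node_hash(sr, c)
            fn >>= 1
            sn >>= 1
        return fr == first_hash and sr == second_hash and sn == 0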


| The third choice is how to gossip. Probably the best approach is some
| sort of peer-to-peer network, with the possibility of using other
| servers if the P2P approach doesn't work. However, gossip needs to be
| censorship resistant, which can be a bit tricky. We also want gossip
| to deal with a small fraction of malicious nodes, which means some
| approaches are tricky: in particular nodes can't stop if some nodes
| say "we've heard it before".
| 
| We also need to make sure we don't lose submitted heads, even as old
| heads grow in number. Using consistency proofs to reduce the number of
| heads helps with this problem.
| 
| DHTs may be a workable approach, but I don't know of Byzantine proof
| ones. Flood-fill on random graphs scales if you make it publish only,
| but dealing with old data gets tricky: we want new nodes to learn it,
| without having to send it unnecessarily. Showing the result of an
| audit and backfilling can work, but we need to think about the details
| and try some experiments.

This is a pretty large subject. My initial thoughts are that i) boy,
are there ways we could mess this up, and ii) we don't want clients to
have to try to talk to other clients directly. Taken together, that
leads me to a somewhat defensive model where clients use an already
established, untrusted peer as a vehicle for conveying their gossip
messages. That doesn't mean one couldn't build a useful protocol on
top of that, though.
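
As a sketch of what such a vehicle could look like (the endpoint path
and message format below are made up for illustration):

    import requests  # third-party HTTP library

    def pollinate(server_url, my_sths):
        """Hand our known, signature-checked STHs to an already
        contacted but untrusted server, and take whatever STHs it
        has collected from others in return. The server only
        ferries signed STHs; it is not trusted with anything
        else."""
        resp = requests.post(server_url + "/ct/v1/sth-pollination",
                             json={"sths": my_sths}, timeout=10)
        resp.raise_for_status()
        # Returned STHs must have their log signatures verified
        # before being stored or gossiped onwards.
        return resp.json().get("sths", [])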
