dawuud:
> [..]
>>
>> Indeed no, but I understand now and the comparison was useful, thanks. I 
>> think I was originally thrown off because the term "PKI" brought to my head 
>> X509 and WoT mapping of real-identities-to-keys.
>>
>> If I understand correctly, what you mean here instead is a service to 
>> provide consistency guarantees for the nodes in the network, so that they 
>> have some confidence they're talking to "the right network" and that they're 
>> not getting eclipse- or sybil-attacked. AFAIU the tor consensus system 
>> identifies each node using the node public key, so real-identities are not 
>> relevant. Then it also provides mappings between these keys/ids and their 
>> physical addresses.
> 
> I do not agree that PKI is the wrong word to use. I am using the term
> PKI in the same way that much of the mixnet literature uses it. And
> when working with George, Ania and Claudia we used the PKI term as
> well. [..] I don't know what you mean by "real".
> 

I didn't say PKI is the "wrong" word to use. I *would* say that PKI is not the 
best term to use here, even if it is "established" in the academic literature. 
Texts specialised in one topic sometimes use the same words to mean slightly 
different things than texts specialised in a different area, which is 
confusing if you have to deal with both topics at the same time.

In the wider security field, "PKI" is also used for things like X509, WoT and 
similar. These do *different security jobs* from Directory-Authority-based 
systems, so I'd prefer not to call both groups "PKI".

By "real [identity]" I mean the abstract idea that I have, in the 
flesh-and-blood computer inside my head, of the remote party that I'm 
communicating with. Crypto protocols deal only with public keys - on the crypto 
level you are not talking with a person or organisation, but with "the 
computational entity that has knowledge of the private key corresponding to 
public key K". So we need a system outside of the crypto, to securely map 
public keys to these "communication entities" we imagine in our heads. Commonly 
this is called "PKI".
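
To make the distinction concrete, here is a minimal sketch in Python (using 
PyNaCl, purely for illustration - the "directory" table and the name in it are 
hypothetical). The crypto only ever tells you "this message was signed by the 
holder of the private key for K"; mapping K to a name you actually care about 
is a separate, non-crypto job, and *that* mapping is what I'd call "PKI":

    # Minimal sketch (PyNaCl, illustration only): crypto binds a message to a
    # key, not to a person.
    from nacl.signing import SigningKey
    from nacl.encoding import HexEncoder

    sk = SigningKey.generate()   # held by some remote party
    vk = sk.verify_key           # the public key K that we see

    signed = sk.sign(b"hello")
    vk.verify(signed)            # proves only: "signed by whoever holds the
                                 # private key corresponding to K"

    # Hypothetical key-to-identity table: the part the crypto cannot provide.
    # X509 chains, WoT signatures etc. are different ways of maintaining it.
    key_id = vk.encode(encoder=HexEncoder).decode()
    directory = {key_id: "alice@example.org"}   # hypothetical entry
    print("you believe you are talking to:", directory[key_id])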

Directory authorities perform a different job, so I prefer not to call these 
"PKI" as well. "Consensus service" would be less confusing - at least for me, 
as a security person who is not specialised in anonymity research.

> [..]
> 
> This [MIRANDA] paper is a departure from what we are trying to do
> since they are using the PKI and other mechanisms to defend against
> n-1 attacks whereas the Loopix design uses decoy traffic loops to
> detect n-1 attacks. That having been said, I think it's a brilliant
> paper and I'd like to implement something like it in the future.
> 
>> [..]
>>
>> Do you know of any papers that quantify the security guarantees around 
>> consensus-based approaches? I'm not aware of any, and it would be good to 
>> read some if they exist. I do know that community-detection-based systems do 
>> quantify their security in terms of probabilities of reaching malicious 
>> nodes, based on various quantified assumptions about the link distribution 
>> of social networks and strengths of social connections. It would also be 
>> good to be able to quantifiably compare the two approaches.
> 
> Good question. I would also be grateful if anyone on this list could
> point us to papers that talk more about the security properties of
> consensus-based PKIs/Directory Authority system.  I don't know of
> any. I don't understand why you think social networks and strengths of
> social connections is relevant... but maybe it is. Really, the voting
> protocol that mixminion and Tor use is a deterministic document
> generation algorithm.
> 

I mentioned social networks because the Miranda paper you linked mentions 
community detection, and those sorts of assumptions and goals are typical of 
work that applies community-detection algorithms to security in a 
decentralised network. On a closer reading, though, I see that it is meant as 
a secondary improvement to the main contribution of the paper, and not as a 
decentralised alternative to a system based on directory authorities.

So yes, they are not relevant for the reasons I originally thought. However, 
I still hope to see a decentralised alternative to directory authorities, and 
quantifying security properties would be a good first step towards either 
constructing one or showing that no secure construction is possible.

>    [MIXMINIONDIRAUTH] Danezis, G., Dingledine, R., Mathewson, N.,
>                       "Type III (Mixminion) Mix Directory Specification",
>                       December 2005, <https://www.mixminion.net/dir-spec.txt>.
> 
>    [TORDIRAUTH]  "Tor directory protocol, version 3",
>                   
> <https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt>.
> 
> I've heard that I2p uses a completely different kind of PKI... involving a
> gossip protocol. I suspect it is highly vulnerable to epistemic attacks which
> is supposed to be one of the main reasons to use a design like Nick's.
> 

After a quick web search on "epistemic attacks", the main paper I can find 
[1] shows that such attacks are very strong if each node knows about only a 
small fraction (n nodes) of the whole network (N nodes).

They motivate this assumption (n << N) by describing a discovery-based p2p 
network where each node "samples" (i.e. directly contacts) a small fraction of 
the network. Sampling is equated with mere "knowledge" of a node, so the act 
of "sampling" an attacker-controlled node gives the attacker (or a GPA) the 
ability to know exactly which nodes "know" the target node.

The paper does not seem to consider the possibility that nodes could discover 
more of the network without directly sampling every node, e.g. via gossip with 
their neighbours about "which other nodes exist".

This does not invalidate the mathematics or the proofs, but it does 
invalidate the assumption that n << N, which is required for the attacks to be 
practical. So if I2P has a convincing argument that n ~= N for their gossip 
system, then AFAIU they can claim a reasonable level of defense against the 
attack(s) described in this particular paper.
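
To illustrate why gossip changes this picture, here is a toy simulation (my 
own rough model, *not* I2P's actual netDb mechanism): each node starts off 
having directly sampled only a handful of peers, but after a few rounds of 
merging peer lists, everyone's knowledge set approaches the whole network:

    # Toy gossip-discovery model (not I2P's actual mechanism): nodes start
    # knowing a tiny random sample (n << N) and merge peer lists each round.
    import random

    N = 1000         # total nodes in the network
    BOOTSTRAP = 5    # peers each node learns about by direct sampling
    ROUNDS = 10

    knowledge = [set(random.sample(range(N), BOOTSTRAP)) | {i}
                 for i in range(N)]

    for r in range(ROUNDS):
        for i in range(N):
            peer = random.choice(list(knowledge[i]))  # gossip with a known peer
            merged = knowledge[i] | knowledge[peer]
            knowledge[i] = set(merged)
            knowledge[peer] = set(merged)
        avg = sum(len(k) for k in knowledge) / N
        print("round %2d: average knowledge-set size n = %4d of N = %d"
              % (r + 1, avg, N))

A real system obviously has churn, lying gossip partners, and so on, but the 
point stands that "knowing about" a node does not have to mean "having 
directly sampled" it.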

Furthermore, the assumption that nodes must "sample" other nodes in order to 
"know" them is required for some of the described attacks to work, e.g. 
section 3.1: "The adversary need only know the knowledge set of the target S0 
for the lower bound we have stated to hold". This assumption would also be 
false for systems that involve indirect discovery. (A modified attack could 
still work by attempting to infer the knowledge set of S0, but I assume it 
would cost more and be less effective, especially if n ~= N.)
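
As a back-of-envelope illustration of that last point (my own rough model with 
uniform, independent knowledge sets - not the paper's actual bound): if an 
observer sees a route through 3 relays and asks which users could have built 
it, the candidate set is tiny when n << N but covers essentially everyone when 
n ~= N.

    # Rough model, not the paper's bound: users build routes of 3 relays drawn
    # from their own knowledge set of n relays, out of N relays in total.
    N = 1000     # relays
    U = 10000    # users
    for n in (20, 100, 1000):
        p = (float(n) / N) ** 3   # chance a given user knows all 3 observed relays
        print("n = %4d: expected candidate initiators ~ %.1f of %d users"
              % (n, U * p, U))

With n = 20 the observed route essentially pinpoints the initiator; with 
n ~= N it tells the observer almost nothing.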

(Indirect discovery could arguably make it easier to spoof fake identities, 
but your ISP can do that anyway, even in a system that only supports "direct" 
discovery.)

Therefore, I'm not sure it's correct to discredit fully-decentralised systems 
based solely or primarily on those attacks. I could be interpreting the paper 
wrong, and I'm not well-read in this topic at all; I'd welcome further 
expansion on this point from anyone with more expertise.

X

[1] George Danezis and Paul Syverson, "Bridging and Fingerprinting: Epistemic
    Attacks on Route Selection", PETS 2008.
    <https://www.freehaven.net/anonbib/cache/danezis-pet2008.pdf>

-- 
GPG: ed25519/56034877E1F87C35
GPG: rsa4096/1318EFAC5FBBDBCE
https://github.com/infinity0/pubkeys.git
