On 02/11/2014 04:51 PM, Roger Dingledine wrote:
> On Tue, Feb 11, 2014 at 11:55:05AM -0500, Qingping Hou wrote:
>> (0) client fetches the descriptor for a hidden service.
>> (1) client connects to the introduction point.
>> (2) since client and HS are connected via the introduction point, they
>>     can negotiate a random number over this channel.
>>     (For more details, see [RAND_NEGO].)
>> (3) both client and HS map that random number to a random onion router
>>     using the same scheme, so they end up with the same node.
>>     This is the candidate RP.
>> (4) both client and HS create a 3-hop circuit using the RP as the last
>>     hop.
>> (5) RP joins the circuit originating from the HS to the circuit
>>     originating from the client.
>> (6) now client and HS are connected. Because their original circuits
>>     share the same endpoint (the RP), the length of the path is 5 hops.
>
> Worth discussing.
>
>> to the whole process. Firstly, it reuses the connection to the
>> introduction point for both sides, so it requires no extra circuit
>> build-up. Secondly, the bottleneck is circuit setup; cell/stream
>> transmission delay is actually pretty low.
>
> To be clear, the client is the one who learns first what the RP should
> be, yes? That means:
>
> A) The problem George brought up -- the client can keep doing this dance
> until they agree upon an RP that the client controls, and now the HS
> effectively has a two-hop path to the RP. Maybe that is ok (two is still
> more than one), but it should be made clear.
>
> B) The client should extend a circuit to the RP first, establish a
> rendezvous cookie there, and only then respond to the HS with its R_a
> and rend cookie? Otherwise there will be a race where both sides try to
> extend to the RP, and it's unspecified what happens if the HS gets
> there first.
>
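As an aside, the mapping in step (3) above could be sketched roughly as follows. This is only an illustration, not tor source code: the function name, the relay-fingerprint list, and the choice of SHA-256 are all hypothetical; the real scheme would be whatever [RAND_NEGO] specifies.

```python
import hashlib

def pick_rendezvous_point(shared_random: bytes, relay_fingerprints: list) -> str:
    """Deterministically map a negotiated random value to one relay.

    Both client and HS must run this over the same relay set (e.g. from
    the same consensus), otherwise they will disagree on the RP.
    """
    relays = sorted(relay_fingerprints)  # canonical order on both sides
    digest = hashlib.sha256(shared_random).digest()
    index = int.from_bytes(digest, "big") % len(relays)
    return relays[index]

# Both sides feed in the same negotiated value and relay set,
# so they independently arrive at the same candidate RP:
relays = ["FP_A", "FP_B", "FP_C", "FP_D"]
rp_client = pick_rendezvous_point(b"negotiated R_c", relays)
rp_hs = pick_rendezvous_point(b"negotiated R_c", list(reversed(relays)))
assert rp_client == rp_hs
```

Note that for a single run the output is uniform over the relay set, which is exactly why the repeated-game attack in (A) matters: the client can rerun the negotiation until the output lands on a relay it controls.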
No, to make it faster, the client should return R_a immediately. This is
a symmetric design: both sides send the rendezvous cookie along their own
circuits, and the RP joins the two circuits once it has received the same
rendezvous cookie from both.

>> Note that at step (2), if the HS is able to recover R_a from H(R_a),
>> it can take control over R_c. To mitigate this, we can use a variant
>> of the Diffie-Hellman handshake:
>>
>> (1) client generates a public key g^x and sends the digest H(g^x) to
>>     the HS
>> (2) HS remembers H(g^x), generates a public key g^y and sends it back
>>     to the client
>> (3) client receives g^y and sends back g^x to the HS
>> (4) HS checks g^x against H(g^x) to make sure it was not generated
>>     after the client received g^y
>> (5) now both client and HS compute a shared random number: R_c = g^(xy)
>
> You're making both sides do public key operations just because the hash
> function might be broken? I would guess the load, and DoS opportunities,
> introduced by public key operations on the HS side will outweigh any
> theoretical benefits here.

Agreed. Will a large random number be sufficient here? We are working on
the prototype at the moment; maybe we can measure this a bit after it's
done. :)

>> This is where hop negotiation comes into play. A negotiated hop is
>> guaranteed to be a random node and cannot be determined by anyone.
>
> For a single run this is true, but for the repeated game it's not. This
> might be the killer flaw here.

Right, as we noted at the beginning, this proposal focuses on speeding up
HS while not introducing new vulnerabilities. Current HSv3 has this
problem as well: as I replied in George's mail, an attacker can keep
sending INTRODUCE cells to the HS, forcing it to create new circuits,
i.e. to re-pick the last hop.

_______________________________________________
tor-dev mailing list
[email protected]
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
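For reference, the commit-reveal idea discussed above (with a plain large random number instead of Diffie-Hellman, as suggested in the reply) could be sketched as below. This is a hypothetical illustration: the helper names and the use of SHA-256 are assumptions, not the proposal's actual construction.

```python
import hashlib
import hmac
import os

def commit(value: bytes) -> bytes:
    """Hash commitment: reveals nothing about value (assuming the hash
    is preimage-resistant), but binds the sender to it."""
    return hashlib.sha256(value).digest()

# (1) client picks R_a and initially sends only the commitment H(R_a),
#     so the HS cannot choose its own value as a function of R_a
r_a = os.urandom(32)
c_a = commit(r_a)

# (2) HS stores H(R_a) and replies with its own random value R_b
r_b = os.urandom(32)

# (3) client reveals R_a; (4) HS checks it against the commitment,
#     ensuring R_a was fixed before the client saw R_b
assert hmac.compare_digest(commit(r_a), c_a)

# (5) both sides derive the shared random number from both contributions;
#     neither side alone could bias the result in a single run
r_c = hashlib.sha256(r_a + r_b).digest()
```

Within one run neither party controls r_c, but, as noted above, this does not help against the repeated game: a client can simply abort and restart the exchange until r_c maps to a relay it likes.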
