Re: [tor-dev] Pluggable transport idea: TLS session resumption
On Wed, 7 Sep 2016 14:24:12 -0700, David Fifield wrote:
> The protocol as just described would be vulnerable to active probing;
> the censor could test for servers by sending them garbage session
> tickets and seeing how they respond. But that's easy to fix. We can,
> for example, let the client and server have a shared secret, and have
> the server treat the session ticket as the client part of an obfs4
> handshake--which conveniently resembles a random blob. If the session
> ticket doesn't pass obfs4 authentication, then the TLS server can
> respond as it naturally would if a client sent an invalid session
> ticket; i.e., issue a new ticket and do a full handshake (then send
> dummy data, I guess). The server can also honor its own legitimately
> issued tickets, but still send dummy data in response. Only clients
> who know the shared secret will be able to access the proxy
> functionality of the server.

Don't use the obfs4 handshake for this (or anything new, really). It's possible to do better.

> In order to block such a transport, the censor will have to look at
> features other than the server certificate. It could, for example:
>  * block all session tickets (dunno how costly that would be)

That's not really feasible: session tickets are an optimization, so the correct behavior is to fall back to the standard handshake. Though this depends on what clients do when the connection is closed after the ClientHello is sent.

>  * statefully track which tickets servers have issued, and block
>    connections that use an unknown ticket.

This is probably feasible, particularly for the sort of people that have been looking at ClientHellos already anyway.

Regards,

-- 
Yawning Angel

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
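The stateful-tracking attack called "probably feasible" above can be sketched in a few lines. This is a hypothetical, simplified model of a censor's middlebox, not anything known to be deployed: it assumes the censor can observe NewSessionTicket messages (which TLS 1.2 sends in the clear) and keeps a per-server set of tickets it has seen issued.

```python
class TicketTracker:
    """Censor-side sketch: remember which session tickets each TLS server
    has issued, and flag any resumption attempt that presents a ticket
    the censor never saw that server hand out."""

    def __init__(self):
        # server address -> set of tickets observed in NewSessionTicket messages
        self.issued = {}

    def record_issued(self, server, ticket):
        """Called when a NewSessionTicket from `server` is observed."""
        self.issued.setdefault(server, set()).add(ticket)

    def is_suspicious(self, server, ticket):
        """A ClientHello ticket this server was never seen issuing is
        suspicious -- exactly the case of a client-fabricated ticket."""
        return ticket not in self.issued.get(server, set())
```

A real censor would also have to bound memory (tickets expire), merge observations across vantage points, and tolerate tickets issued before monitoring began, which is why legitimate resumptions would also trip this check some of the time.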
Re: [tor-dev] Reducing initial onion descriptor upload delay (down to 0s?)
> On 8 Sep 2016, at 01:40, Ivan Markin wrote:
>
> Hi tor-dev@!
>
> Moving the discussion on the future of rendinitialpostdelay from
> ticket #20082 [1] here.
>
> Transcribing the issue:
>
>> At the moment the descriptor is posted MIN_REND_INITIAL_POST_DELAY
>> (30) seconds after onion service initialization. For the use case of
>> real-time one-time services (like OnionShare, etc.) one has to wait
>> 30 seconds until the onion service can be reached. Besides, if a
>> client tries to reach the service before its descriptor is ever
>> published, the tor client gets stuck, preventing the user from
>> reaching the service after the descriptor is published. Like this:
>> "Could not pick one of the responsible hidden service directories to
>> fetch descriptors, because we already tried them all unsuccessfully."
>>
>> It was bumped from 5s to 30s due to "load on authorities"
>> (11d89141ac0ae0ff371e8da79abe218474e7365c):
>>
>>     + o Minor bugfixes (hidden services):
>>     +   - Upload hidden service descriptors slightly less often, to
>>     +     reduce load on authorities.
>>
>> "Load on authorities" is not the point anymore because we haven't
>> used v0 since 0.2.2.1-alpha. Thus I think it's safe to drop it back
>> to at least 5s (3s?) for all services. Or even remove it entirely.
>
> The questions are:
> * Can we drop this delay? Why?

It doesn't actually gain us anything - we originally added it to prevent load on the authorities, and then we put HS descriptors on HSDirs instead. However, I have concerns that too fast a rate could DoS HSDirs in some circumstances, even with intro point rebuild limits.

> * Can we set it back to 5s thus avoiding issues that can arise after
>   removing the delay?

Let's base the delay on the amount of time it takes for an HS descriptor to stabilise.
This is the situation we're trying to prevent:
* the HS opens all its intro point circuits
* it sends its descriptor
* one of the intro points fails
* it sends another descriptor

If this hardly ever happens in the first 30 seconds, we likely don't need any delay at all. But how could we measure how frequently it happens, and how long it takes?

> * Should we do something now or postpone it to prop224?

It would be nice to have this change in 0.2.9 for Single Onion Services, and I think also for HSs with OnionBalance.

> [1] https://trac.torproject.org/projects/tor/ticket/20082
> --
> Ivan Markin

Tim Wilson-Brown (teor)

teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org
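One rough way to get a handle on that frequency, before doing any live measurement, is a toy Monte Carlo simulation of the failure sequence above. The per-circuit failure probability below is an assumed placeholder, not a measured figure; real numbers would have to come from instrumenting services on the live network.

```python
import random


def simulate_reupload_fraction(n_services=10000, n_intro=3,
                               fail_prob_30s=0.01, seed=1):
    """Toy Monte Carlo: estimate the fraction of hidden services that
    would have to re-upload their descriptor because at least one intro
    point circuit failed within the first 30 seconds.

    fail_prob_30s is an ASSUMED per-circuit failure probability for
    illustration only; it is not a measured Tor network value.
    """
    rng = random.Random(seed)
    reuploads = sum(
        1 for _ in range(n_services)
        if any(rng.random() < fail_prob_30s for _ in range(n_intro))
    )
    return reuploads / n_services
```

With independent circuits the expected fraction is 1 - (1 - p)^k, so even a 1% per-circuit failure rate across 3 intro points means roughly 3% of descriptors would need an early re-upload; whether that load is acceptable is exactly the question for the HSDirs.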
[tor-dev] Pluggable transport idea: TLS session resumption
Here's an idea for a new pluggable transport. It's just a TLS tunnel, but with a twist that allows the server's certificate to be omitted, depriving the censor of many classification features, such as whether the certificate is signed by a CA, the certificate's lifetime, and whether the commonName matches the server's IP address. I got the idea from ShadowsocksR:
https://github.com/breakwa11/shadowsocks-rss/blob/master/ssr.md#2%E6%B7%B7%E6%B7%86%E6%8F%92%E4%BB%B6

The trick that makes it work is RFC 5077 session tickets. How it's supposed to work is, the client makes a TLS connection as usual, and the server sends back a session ticket before beginning the flow of application data. The session ticket is just a random blob from the client's point of view; from the server's point of view it's an authenticated ciphertext that encodes session parameters such as the ciphersuite and master secret. The next time the client connects to the same server, it sends the session ticket along with its ClientHello, and the server can skip sending its certificate.

Here's a simple way to adapt session tickets for circumvention. The client *always* sends a session ticket when making a connection, even the very first time. The ticket doesn't come from a previous communication with the server; the client generates the ticket itself. The ticket could for instance be a random string--the censor wouldn't be able to distinguish--but we can also use those bits for other purposes. When the server receives the ClientHello, it *pretends* that the ticket was a valid one, and finishes the handshake using hardcoded session parameters--statically configuring what it would otherwise have encoded into a session ticket.

The protocol as just described would be vulnerable to active probing; the censor could test for servers by sending them garbage session tickets and seeing how they respond. But that's easy to fix.
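For the self-generated ticket to blend in, its length should match what real servers emit. If I'm reading RFC 5077 section 4 right, the recommended construction is a 16-byte key name, a 16-byte IV, a length-prefixed encrypted state, and a 32-byte HMAC-SHA-256 tag; a sketch that mimics those sizes (with an assumed, tunable state length) might look like:

```python
import os


def fake_session_ticket(state_len=128):
    """Build a random blob laid out like the ticket construction
    recommended in RFC 5077 section 4, so its size resembles tickets
    from servers that follow the recommendation. Every field is just
    random bytes -- to the censor a real ticket is opaque anyway.
    state_len is an assumed parameter; the realistic distribution of
    encrypted-state sizes would need the measurement the post calls for."""
    key_name = os.urandom(16)          # identifies the server's ticket key
    iv = os.urandom(16)                # CBC IV in the recommended format
    encrypted_state = os.urandom(128 if state_len is None else state_len)
    length = len(encrypted_state).to_bytes(2, "big")  # 2-byte vector length
    mac = os.urandom(32)               # HMAC-SHA-256 tag in the real format
    return key_name + iv + length + encrypted_state + mac
```

Of course, servers are free to use any opaque format, which is exactly why measuring the in-the-wild size distribution matters more than copying the RFC's example layout.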
We can, for example, let the client and server have a shared secret, and have the server treat the session ticket as the client part of an obfs4 handshake--which conveniently resembles a random blob. If the session ticket doesn't pass obfs4 authentication, then the TLS server can respond as it naturally would if a client sent an invalid session ticket; i.e., issue a new ticket and do a full handshake (then send dummy data, I guess). The server can also honor its own legitimately issued tickets, but still send dummy data in response. Only clients who know the shared secret will be able to access the proxy functionality of the server.

In order to block such a transport, the censor will have to look at features other than the server certificate. It could, for example:
 * block all session tickets (dunno how costly that would be)
 * statefully track which tickets servers have issued, and block connections that use an unknown ticket.
 * track the fraction of connections that use session tickets on each TLS server.
 * active-probe the server in order to get a certificate, and then look at certificate features.

We'd have to do some research to find out the distribution of session ticket sizes in the wild. (RFC 5077 recommends a specific format: https://tools.ietf.org/html/rfc5077#section-4.)

I didn't think of this idea; it comes from ShadowsocksR and its tls1.2_ticket_auth plugin.
https://github.com/breakwa11/shadowsocks-rss/blob/master/ssr.md#2%E6%B7%B7%E6%B7%86%E6%8F%92%E4%BB%B6

The documentation is in Chinese. Here is a rough English rendering (via Google Translate):

    Simulates a TLS 1.2 handshake in which the client connects already
    holding a session ticket. The simulation is now fully implemented;
    packet captures from testing show it is perfectly disguised as
    TLS 1.2. Because the ticket removes the need to send certificates
    and the other complicated steps, the firewall cannot make a
    judgment based on the certificate. It also comes with a certain
    amount of built-in resistance to replay attacks.
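As a concrete illustration of the shared-secret idea, here is a minimal sketch. It stands in a plain HMAC tag for the obfs4 client handshake suggested above; unlike obfs4, this sketch has no replay protection (a censor replaying a captured valid ticket would be accepted), so it only shows the shape of the scheme, not a recommended design. TICKET_LEN is an assumed size.

```python
import hashlib
import hmac
import os

TICKET_LEN = 176  # assumed total ticket size; tune to match tickets in the wild


def make_auth_ticket(shared_secret):
    """Client side: build a ticket that looks like a uniformly random blob
    but carries an authenticator only holders of the shared secret can
    produce -- random nonce followed by HMAC-SHA-256(secret, nonce)."""
    nonce = os.urandom(TICKET_LEN - 32)
    tag = hmac.new(shared_secret, nonce, hashlib.sha256).digest()
    return nonce + tag


def server_accepts(shared_secret, ticket):
    """Server side: verify the tag in constant time. On failure the server
    would NOT error out -- it would behave exactly as for any invalid
    ticket: do a full handshake, issue a fresh ticket, send dummy data."""
    if len(ticket) != TICKET_LEN:
        return False
    nonce, tag = ticket[:-32], ticket[-32:]
    expected = hmac.new(shared_secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

An active prober without the secret can only produce tickets that fail this check, so it sees the same fallback behavior as any TLS client with a stale ticket, which is the probing-resistance property the post is after.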
    When a replay attack is detected, it is logged on the server side;
    you can grep the logs for "replay attack". You can also use this
    plugin to find out whether TLS connections are being interfered
    with in your area.

    Firewalls are relatively powerless against TLS, so this plugin's
    resistance to blocking should be stronger than that of the other
    plugins, though it may still encounter a lot of interference. The
    protocol itself detects any interference and disconnects when it
    does, avoiding long waits, so the client's browser can reconnect on
    its own.

    This plugin is compatible with the original protocol (the server
    must be configured as tls1.2_ticket_auth_compatible). It has one
    more handshake round trip than the original protocol, which makes
    connecting take longer; with the C# client's automatic reconnect it
    performs better than the other plugins.

    It supports a custom parameter, the SNI, i.e. the host name field
    that it sends; this function is very similar to the TOR
[tor-dev] Reducing initial onion descriptor upload delay (down to 0s?)
Hi tor-dev@!

Moving the discussion on the future of rendinitialpostdelay from ticket #20082 [1] here.

Transcribing the issue:

> At the moment the descriptor is posted MIN_REND_INITIAL_POST_DELAY
> (30) seconds after onion service initialization. For the use case of
> real-time one-time services (like OnionShare, etc.) one has to wait
> 30 seconds until the onion service can be reached. Besides, if a
> client tries to reach the service before its descriptor is ever
> published, the tor client gets stuck, preventing the user from
> reaching the service after the descriptor is published. Like this:
> "Could not pick one of the responsible hidden service directories to
> fetch descriptors, because we already tried them all unsuccessfully."
>
> It was bumped from 5s to 30s due to "load on authorities"
> (11d89141ac0ae0ff371e8da79abe218474e7365c):
>
>     + o Minor bugfixes (hidden services):
>     +   - Upload hidden service descriptors slightly less often, to
>     +     reduce load on authorities.
>
> "Load on authorities" is not the point anymore because we haven't
> used v0 since 0.2.2.1-alpha. Thus I think it's safe to drop it back
> to at least 5s (3s?) for all services. Or even remove it entirely.

The questions are:
* Can we drop this delay? Why?
* Can we set it back to 5s, thus avoiding issues that can arise after removing the delay?
* Should we do something now, or postpone it to prop224?

[1] https://trac.torproject.org/projects/tor/ticket/20082
--
Ivan Markin