Re: [tor-dev] Proposal 198: Restore semantics of TLS ClientHello
On 2012-03-20, Nick Mathewson ni...@freehaven.net wrote:

Filename: 198-restore-clienthello-semantics.txt
Title: Restore semantics of TLS ClientHello
Author: Nick Mathewson
Created: 19-Mar-2012
Status: Open

Overview:

  Currently, all supported Tor versions try to imitate an older version
  of Firefox when advertising ciphers in their TLS ClientHello. This
  feature is intended to make it harder for a censor to distinguish a
  Tor client from other TLS traffic. Unfortunately, it makes the
  contents of the ClientHello unreliable: a server cannot conclude that
  a cipher is really supported by a Tor client simply because it is
  advertised in the ClientHello.

  This proposal suggests an approach for restoring sanity to our use of
  the ClientHello, so that we still avoid ciphersuite-based
  fingerprinting, but allow nodes to negotiate better ciphersuites than
  they are allowed to negotiate today.

Background reading:

  Section 2 of tor-spec.txt describes our current baroque link
  negotiation scheme. Proposals 176 and 184 give more information about
  how it got that way. Bug 4744 is a big part of the motivation for
  this proposal: we want to allow Tors to advertise even more ciphers,
  some of which we would actually prefer to the ones we are using now.

  What you need to know about the TLS handshake is that the client
  sends a list of all the ciphersuites that it supports in its
  ClientHello message, and then the server chooses one and tells the
  client which one it picked.

Motivation and constraints:

  We'd like to use some of the ECDHE TLS ciphersuites, since they allow
  us to get better forward secrecy at lower cost than our current
  DH-1024 usage. But right now, we can't ever use them, since Tor will
  advertise them whether or not it has a version of OpenSSL that
  supports them. (OpenSSL before 1.0.0 did not support ECDHE
  ciphersuites; OpenSSL before 1.0.0e or so had some security issues
  with them.)

  We cannot have the rule be "Tors must only advertise ciphersuites
  that they can use", since current Tors will advertise such
  ciphersuites anyway. We cannot have the rule be "Tors must support
  every ECDHE ciphersuite on the following list", since current Tors
  don't do all that, and since one prominent Linux distribution builds
  OpenSSL without ECC support because of patent/freedom fears.

  Fortunately, nearly every ciphersuite that we would like to advertise
  to imitate FF8 (see bug 4744) is currently supported by OpenSSL 1.0.0
  and later. This enables the following proposal to work.

Proposed spec changes:

  I propose that the rules for handling ciphersuites at the server side
  become the following:

  If the list of ciphersuites in the ClientHello contains no ciphers
  other than the following[*], it indicates that the Tor v1 link
  protocol is in use:

    TLS_DHE_RSA_WITH_AES_256_CBC_SHA
    TLS_DHE_RSA_WITH_AES_128_CBC_SHA
    SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA
    SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA

  If the advertised ciphersuites in the ClientHello are _exactly_[*]
  the following, they indicate that the Tor v2+ link protocol is in
  use, AND that the ClientHello may have unsupported ciphers. In this
  case, the server may choose DHE_RSA_WITH_AES_128_CBC_SHA or
  DHE_RSA_WITH_AES_256_SHA, but may not choose any other cipher:

    TLS1_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
    TLS1_ECDHE_RSA_WITH_AES_256_CBC_SHA
    TLS1_DHE_RSA_WITH_AES_256_SHA
    TLS1_DHE_DSS_WITH_AES_256_SHA
    TLS1_ECDH_RSA_WITH_AES_256_CBC_SHA
    TLS1_ECDH_ECDSA_WITH_AES_256_CBC_SHA
    TLS1_RSA_WITH_AES_256_SHA
    TLS1_ECDHE_ECDSA_WITH_RC4_128_SHA
    TLS1_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
    TLS1_ECDHE_RSA_WITH_RC4_128_SHA
    TLS1_ECDHE_RSA_WITH_AES_128_CBC_SHA
    TLS1_DHE_RSA_WITH_AES_128_SHA
    TLS1_DHE_DSS_WITH_AES_128_SHA
    TLS1_ECDH_RSA_WITH_RC4_128_SHA
    TLS1_ECDH_RSA_WITH_AES_128_CBC_SHA
    TLS1_ECDH_ECDSA_WITH_RC4_128_SHA
    TLS1_ECDH_ECDSA_WITH_AES_128_CBC_SHA
    SSL3_RSA_RC4_128_MD5
    SSL3_RSA_RC4_128_SHA
    TLS1_RSA_WITH_AES_128_SHA
    TLS1_ECDHE_ECDSA_WITH_DES_192_CBC3_SHA
    TLS1_ECDHE_RSA_WITH_DES_192_CBC3_SHA
    SSL3_EDH_RSA_DES_192_CBC3_SHA
    SSL3_EDH_DSS_DES_192_CBC3_SHA
    TLS1_ECDH_RSA_WITH_DES_192_CBC3_SHA
    TLS1_ECDH_ECDSA_WITH_DES_192_CBC3_SHA
    SSL3_RSA_FIPS_WITH_3DES_EDE_CBC_SHA
    SSL3_RSA_DES_192_CBC3_SHA

  [*] The "extended renegotiation is supported" ciphersuite, 0x00ff, is
  not counted when checking the list of ciphersuites.

  Otherwise, the ClientHello has these semantics: the inclusion of any
  cipher supported by OpenSSL 1.0.0 means that the client supports it,
  with the exception of SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA, which is
  never supported.
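The server-side rule above amounts to a set-membership check for the v1 list and an exact-match check (ignoring 0x00ff) for the v2+ list. A rough sketch, not Tor's actual code: the function and enum names here are invented for illustration, only the four v1 codepoints are spelled out, and the full 28-entry FF8-imitation list is passed in as a parameter.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Standard TLS codepoints for the four v1-list suites. */
#define TLS_DHE_RSA_WITH_AES_256_CBC_SHA   0x0039
#define TLS_DHE_RSA_WITH_AES_128_CBC_SHA   0x0033
#define SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA  0x0016
#define SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA  0x0013
#define RENEGOTIATION_SCSV                 0x00ff /* ignored, per [*] */

enum link_hint {
    HINT_V1,       /* only v1-list ciphers: Tor v1 link protocol */
    HINT_V2_PLUS,  /* exactly the FF8 list: v2+, ciphers may be fake */
    HINT_HONEST    /* anything else: advertised ciphers are real */
};

static int is_v1_cipher(uint16_t c)
{
    return c == TLS_DHE_RSA_WITH_AES_256_CBC_SHA ||
           c == TLS_DHE_RSA_WITH_AES_128_CBC_SHA ||
           c == SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA ||
           c == SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA;
}

/* suites: ciphersuites from the ClientHello, in order.
 * v2_list: the exact FF8-imitation list from the proposal, in order,
 * with the renegotiation SCSV excluded. */
static enum link_hint classify_clienthello(const uint16_t *suites, size_t n,
                                           const uint16_t *v2_list,
                                           size_t v2_len)
{
    size_t i, matched = 0;
    int only_v1 = 1, exactly_v2 = 1;

    for (i = 0; i < n; i++) {
        if (suites[i] == RENEGOTIATION_SCSV)
            continue; /* "not counted when checking the list" */
        if (!is_v1_cipher(suites[i]))
            only_v1 = 0;
        if (matched < v2_len && suites[i] == v2_list[matched])
            matched++;
        else
            exactly_v2 = 0;
    }
    if (only_v1)
        return HINT_V1;
    if (exactly_v2 && matched == v2_len)
        return HINT_V2_PLUS;
    return HINT_HONEST;
}
```

With the real 28-entry list in place of `v2_list`, a server falls through to the honest-ClientHello semantics whenever neither special-case rule fires.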
Re: [tor-dev] SkypeMorph
On Sun, Mar 25, 2012 at 07:18:44PM -0400, Hooman wrote:
> 2- SkypeMorph and pluggable transports: Although our code can
> potentially be used as a pluggable transport, there is a minor
> difficulty with the pluggable transport framework that needs to be
> addressed before it can host our code. As mentioned above, our code
> uses the Skype network for basic login stuff, so it takes a little bit
> more time than what Tor expects from a typical transport (like
> Obfsproxy), so the Tor client gives up building circuits after a
> while. We are aware of OR-controller tricks to solve the problem, but
> that does not seem to be the right way to do it, and it would be
> awesome if the pluggable transport were able to tell Tor that it's
> working on setting up the connection, and that Tor shouldn't give up
> on it until it says it's ready. I am sure other transports could also
> benefit from this.

I've opened https://trac.torproject.org/projects/tor/ticket/5483 for
further discussion on this topic.

Thanks,
--Roger

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Self publishing over Tor Hidden Services
Hi,

Arturo Filastò wrote (23 Mar 2012 22:45:39 GMT):
> I believe this project has some common goals with the work TAILS wants
> to do on the TAILS server edition [1].

Sure. There's probably some work that can be shared. It's unclear to me
what part of it yet, but we'll see.

It's striking how different those projects are, but not as much as the
fact that we independently thought of proposing them for GSoC the very
same year. I think it confirms something like this is needed, and I'm
glad to see this happen.

Tails server and APAF share something important: they don't exist yet.
There are a few big differences between Tails server and APAF, though.
Let me mention some of those, and we'll see what we can learn from
this. At least I'm sure comparing Tails server with APAF will help
clarify what Tails server would be :)

Amnesia vs. post-mortem analysis of the equipment
-------------------------------------------------

Tails server is likely to be based on Tails (no kidding), inheriting
much, if not all, of its threat model and specification, including
taking radical measures to avoid writing anything to local storage
media unless the user explicitly asks for it. I did not see any such
thing in the APAF description. Is this part of the APAF threat model?

I must say I am impressed with how far something like the TBB goes to
satisfy this requirement at the application level. At some level,
things get out of the control of most applications anyway (hints: swap,
usage of various OS functionality that may, or may not, write stuff to
disk), but even if we disregard that level, I'm not sure how a webapp
framework for a generic language such as Python could try to satisfy
this requirement as well as the TBB does.

Target hardware and usage model
-------------------------------

As far as I understand it, APAF is aimed at running on the desktop
(that is, on a desktop or laptop computer that's running a full-blown
desktop environment such as GNOME). We expect most of the services
provided by Tails server to run 24/7 in cupboards, garages and
basements. I don't expect users to keep their desktop or laptop running
and online 24/7. This is one of the reasons why Tails server should be
fully functional on boxes people do not want, or cannot, use as desktop
computers anymore, e.g. because the hardware is half-broken or not
powerful enough to run a modern desktop environment plus server
software.

Applications
------------

Tails server is meant to run any existing application we add and
maintain support for, building on existing blocks such as Gobby and a
few others. As far as I understand it, APAF is a framework to write,
and maintain, a set of brand-new applications that would be bound to
this specific environment -- in other words, people not interested in
Tor are unlikely to ever contribute to such an application. I find the
APAF approach to be very ambitious.

Future
------

Tails server would be a practical contribution to the FreedomBox
project, one that should explore some of the FreedomBox aspects:

1. In a way that's immediately useful to lots of people.

2. In a way that _practically_ attacks some of the FreedomBox technical
   challenges (e.g. configuration management on the long term, upgrade
   management, unlocking encrypted storage at boot time on a
   potentially headless machine).

3. With a specific threat model in mind, one that's not shared by all
   people who {are, should be, are supposed to be, could, might} be
   working on the FreedomBox project. Showing them deployed, working
   code and systems will be much better advocacy for anonymity, storage
   encryption, and location hiding than trying to explain to them why
   they should write support for all of this themselves.

Ideally, the purpose of Tails server should be taken over by the
FreedomBox some day, and the process that leads to Tails server should
help the FreedomBox to actually exist some day. Sometimes, it's great
to start a project while knowing right from the beginning that it could
very well be obsoleted by something even greater that will be
maintained by, or with, entirely different people.

Tails server should be able to run APAF applications, right?

Cheers,
-- 
intrigeri | GnuPG key @ https://gaffer.ptitcanardnoir.org/intrigeri/intrigeri.asc | OTR fingerprint @ https://gaffer.ptitcanardnoir.org/intrigeri/otr.asc
Re: [tor-dev] Proposal 198: Restore semantics of TLS ClientHello
On Mon, Mar 26, 2012 at 3:17 AM, Robert Ransom rransom.8...@gmail.com wrote:
[...]
>> (OpenSSL before 1.0.0 did not support ECDHE ciphersuites; OpenSSL
>> before 1.0.0e or so had some security issues with them.)
>
> Can Tor detect that it is running with a version of OpenSSL with
> those security issues and refuse to support the broken ciphersuites?

We can detect if the version number is for a broken version, but I
don't know a good way to detect if the version number is old but the
issues are fixed (for example, if it's one of those Fedora versions
that lock the OpenSSL version to something older so that they don't run
into spurious ABI incompatibility).

I need to find out more about what the security issues actually were:
when I took a quick look, the only one I saw was a problem with doing
multithreaded access to SSL data structures when using ECC. That
wouldn't be a problem for us, but if there are other issues, we should
know about them.

[...]
>> Otherwise, the ClientHello has these semantics: The inclusion of any
>> cipher supported by OpenSSL 1.0.0 means that the client supports it,
>> with the exception of SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA which is
>> never supported. Clients MUST advertise support for at least one of
>> TLS_DHE_RSA_WITH_AES_256_CBC_SHA or
>> TLS_DHE_RSA_WITH_AES_128_CBC_SHA.
>
> I'm no longer comfortable with 128-bit symmetric keys. An attacker
> with many messages encrypted with a 128-bit symmetric cipher can
> attempt a brute-force search on many messages at once, and is likely
> to succeed in finding keys for some messages. (See
> http://cr.yp.to/papers.html#bruteforce .)

Hm. We'd need to check whether all the servers today support an AES256
ciphersuite. Also, wasn't there some dodgy issue in the AES256 key
schedule? Or is that basically irrelevant?

[...]
>> The proposed spec change above tries to future-proof ourselves by
>> not declaring that we support every declared cipher, in case we
>> someday need to handle a new Firefox version. If a new Firefox
>> version comes out that uses ciphers not supported by OpenSSL 1.0.0,
>> we will need to define whether clients may advertise its ciphers
>> without supporting them; but existing servers will continue working
>> whether we decide yes or no.
>
> Why standardize on OpenSSL 1.0.0, rather than OpenSSL 1.0.1?

1.0.0 is good enough to get everything we need for ff8+. Also, when I
wrote the document, 1.0.0 was pretty ubiquitous but 1.0.1 had only been
out for a few days. We could do 1.0.1, I guess.

[...]
>> Can we get OpenSSL to support the dubious FIPS suite excluded above,
>> in order to remove a distinguishing opportunity? It is not so simple
>> as just editing the SSL_CIPHER list in s3_lib.c, since the
>> nonstandard SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA cipher is (IIUC)
>> defined to use the TLS1 KDF, while declaring itself to be an SSL
>> cipher (!).
>
> Would that FIPS ciphersuite provide forward secrecy? If not, then
> there is no point in having clients or servers implement it.

The idea would be that, so long as we advertise ciphers we can't
support, an MITM adversary could make a Tor detector by forging
ServerHello responses to choose the FIPS suite, and then seeing whether
the client can finish the handshake to the point where they realize
that the ServerHello was forged. This is probably not the best MITM
Tor-detection attack, but it might be nice to stomp them as we find
them.

[...]
> [**] Actually, I think it's the Windows SChannel cipher list we
> should be looking at here.
>
> [***] If we did _that_, we'd want to specify that CREATE_FAST could
> never be used on a non-forward-secure link. Even so, I don't like the
> implications of leaking cell types and circuit IDs to a future
> compromise.
>
> A relay whose link protocol implementations can't provide forward
> secrecy to its clients cannot be used as an entry guard -- it would
> be overloaded with CREATE cells very quickly.

Why is that? It shouldn't be facing more than 2x the number of create
cells that a relay faces, and with the ntor handshake, create cell
processing ought to get much faster.
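The "detect if the version number is for a broken version" check discussed above could be as simple as comparing the runtime OpenSSL version number against the first believed-good release. A sketch only, not Tor's actual code: it assumes 1.0.0e is the cutoff, and, as noted in the thread, it would wrongly reject a distribution build that backported the fixes to an older version number.

```c
#include <assert.h>

/* OpenSSL encodes its version number as 0xMNNFFPPS (major, minor, fix,
 * patch letter, status nibble), so release 1.0.0e is 0x1000005fL.
 * The function name is ours; a real implementation would compare
 * SSLeay() or OPENSSL_VERSION_NUMBER at runtime. */
static int version_allows_ecdhe(unsigned long openssl_version)
{
    /* Assumption: 1.0.0e is the first release considered safe for
     * advertising ECDHE ciphersuites. */
    return openssl_version >= 0x1000005fL;
}
```

The weakness Nick points out remains: the version number alone cannot distinguish a genuinely old library from an old-numbered library with the fixes backported.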
Re: [tor-dev] Missing methodname in proposal 180 example
On Sun, Mar 25, 2012 at 1:02 AM, David Fifield da...@bamsoftware.com wrote:
> I found a little typo in proposal 180.
>
> David Fifield

Thanks; just merged it.
Re: [tor-dev] SkypeMorph
On 12-03-25 09:37 PM, Roger Dingledine wrote:
> On Sun, Mar 25, 2012 at 07:18:44PM -0400, Hooman wrote:
>> In our recent work, SkypeMorph [2], we have tried to use Skype video
>> communications as our target protocol for protocol obfuscation.
>> SkypeMorph functionality is similar to Obfsproxy, but the connection
>> between the bridge and the client looks like a Skype video call (the
>> details of how we do this are discussed in the technical report).
>
> Hi Hooman,
>
> Looks like a great first release. Thanks for sharing it with us!
>
> Can you give us some guesses about next steps for resolving these
> issues (or explaining why they aren't actually as worrisome as they
> appear)?
>
> A) It looks like the transport has no notion of adapting to network
> conditions, i.e. congestion control. So it will basically fall apart
> on a low-bandwidth or congested network.

True, but as mentioned in section 8.2 of the technical report, this can
be fixed by considering Skype video calls on different networks,
depending on the network status. (The way Skype bandwidth usage varies
with available bandwidth has been studied, for example:
http://www.tlc-networks.polito.it/oldsite/mellia/papers/skype_info08.pdf )

> B) It sends at a constant rate of 43KB/s in each direction all the
> time. Even if users are willing to tolerate that, it doesn't scale on
> the bridge/relay side if there are lots of users. I wonder how
> feasible a traffic shaping approach would be (where the flow rate
> drops off if there's no underlying traffic), and how much that would
> screw with your statistics. Which leads to:

43KB/s is per connection, so each client gets this bandwidth, while the
bridge can have multiple connections.

> C) The packet size and timing distributions only aim to match the
> first-order properties of Skype. At the same time, DPI vendors have
> already been in a battle with Skype traffic for a while now. How
> advanced do you think DPI vendors are at detecting Skype-like
> traffic, and thus at distinguishing your traffic from real Skype
> traffic? Similarly, how bad is it that you don't follow through with
> the TCP side of the Skype handshake?

The TCP connections are more like control connections: they send a
small number of messages during the call. We actually have some ideas
on how to deal with this, like handing the sockets for these
connections to our software after we fake a call.

> D) The morphing output is basically identical to the naive shaping.
> Are you sure you did it right?
>
> --Roger

As mentioned in the report, the original traffic morphing does not
consider timing at all (which makes it less effective against DPIs) and
it aims at minimizing the overhead, i.e. the number of padding bytes
sent on the wire. When we introduced the inter-packet timing feature,
it was no longer possible to go with the same construction, since
packets may not be sent right away. As a result we tried a different
approach for traffic morphing: we buffer the packets received from Tor;
then, when it is time to send the next packet, we simply estimate the
original packet size by drawing a sample from Tor's packet size
distribution. I know there are other ways this can be done, but in our
experiments we didn't observe any tangible difference in the outcome.
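The buffered-morphing step described above (when the inter-packet timer fires, draw a packet size from an empirical distribution) boils down to inverse-CDF sampling. A hedged sketch: the struct and function names are ours, and the distribution is a toy; SkypeMorph would use tables measured from real Skype calls (for output sizes) or from Tor traffic (when estimating source sizes).

```c
#include <assert.h>
#include <stddef.h>

/* An empirical packet-size distribution, represented as candidate
 * sizes with their cumulative probabilities. */
typedef struct {
    const int *sizes;    /* candidate packet sizes */
    const double *cdf;   /* cumulative probabilities; cdf[n-1] == 1.0 */
    size_t n;
} size_dist;

/* Draw one size via inverse-CDF lookup; u is a uniform random draw in
 * [0, 1), e.g. from a cryptographically strong RNG in practice. */
static int sample_packet_size(const size_dist *d, double u)
{
    size_t i;
    for (i = 0; i + 1 < d->n; i++)
        if (u < d->cdf[i])
            return d->sizes[i];
    return d->sizes[d->n - 1];
}
```

A linear scan is fine for the small tables involved; a binary search over the CDF would do the same job for finer-grained distributions.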
Re: [tor-dev] Proposal 198: Restore semantics of TLS ClientHello
On 2012-03-26, Nick Mathewson ni...@alum.mit.edu wrote:
> On Mon, Mar 26, 2012 at 3:17 AM, Robert Ransom rransom.8...@gmail.com wrote:
> [...]
>>> (OpenSSL before 1.0.0 did not support ECDHE ciphersuites; OpenSSL
>>> before 1.0.0e or so had some security issues with them.)
>>
>> Can Tor detect that it is running with a version of OpenSSL with
>> those security issues and refuse to support the broken ciphersuites?
>
> We can detect if the version number is for a broken version, but I
> don't know a good way to detect if the version number is old but the
> issues are fixed (for example, if it's one of those Fedora versions
> that lock the OpenSSL version to something older so that they don't
> run into spurious ABI incompatibility).
>
> I need to find out more about what the security issues actually were:
> when I took a quick look, the only one I saw was a problem with doing
> multithreaded access to SSL data structures when using ECC. That
> wouldn't be a problem for us, but if there are other issues, we
> should know about them.

The only security issue that I knew affected ECDHE in old versions of
OpenSSL was http://eprint.iacr.org/2011/633 . The paper indicates that
that bug was never in any OpenSSL 1.0.0 release.

>>> Otherwise, the ClientHello has these semantics: The inclusion of
>>> any cipher supported by OpenSSL 1.0.0 means that the client
>>> supports it, with the exception of
>>> SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA which is never supported.
>>> Clients MUST advertise support for at least one of
>>> TLS_DHE_RSA_WITH_AES_256_CBC_SHA or
>>> TLS_DHE_RSA_WITH_AES_128_CBC_SHA.
>>
>> I'm no longer comfortable with 128-bit symmetric keys. An attacker
>> with many messages encrypted with a 128-bit symmetric cipher can
>> attempt a brute-force search on many messages at once, and is likely
>> to succeed in finding keys for some messages. (See
>> http://cr.yp.to/papers.html#bruteforce .)
>
> Hm. We'd need to check whether all the servers today support an
> AES256 ciphersuite. Also, wasn't there some dodgy issue in the AES256
> key schedule? Or is that basically irrelevant?

I am not aware of any additional bugs in AES-256 that are as severe as
the small keyspace of AES-128. I am not aware of any bugs (other than
very serious side-channel leaks in most implementations) in AES-256
when used with keys generated by an acceptable key-derivation function
or random number generator.

>>> Can we get OpenSSL to support the dubious FIPS suite excluded
>>> above, in order to remove a distinguishing opportunity? It is not
>>> so simple as just editing the SSL_CIPHER list in s3_lib.c, since
>>> the nonstandard SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA cipher is (IIUC)
>>> defined to use the TLS1 KDF, while declaring itself to be an SSL
>>> cipher (!).
>>
>> Would that FIPS ciphersuite provide forward secrecy? If not, then
>> there is no point in having clients or servers implement it.
>
> The idea would be that, so long as we advertise ciphers we can't
> support, an MITM adversary could make a Tor detector by forging
> ServerHello responses to choose the FIPS suite, and then seeing
> whether the client can finish the handshake to the point where they
> realize that the ServerHello was forged. This is probably not the
> best MITM Tor-detection attack, but it might be nice to stomp them as
> we find them.

Does OpenSSL validate the certificate chain at all before Tor allows it
to complete the TLS handshake? If not, they can MITM a user's
connection by sending a ServerHello with an invalid certificate chain
(e.g. one in which a certificate is not signed correctly), and see
whether the client completes the TLS handshake like Tor or closes the
connection like a normal client.

>> [**] Actually, I think it's the Windows SChannel cipher list we
>> should be looking at here.
>>
>> [***] If we did _that_, we'd want to specify that CREATE_FAST could
>> never be used on a non-forward-secure link. Even so, I don't like
>> the implications of leaking cell types and circuit IDs to a future
>> compromise.
>>
>> A relay whose link protocol implementations can't provide forward
>> secrecy to its clients cannot be used as an entry guard -- it would
>> be overloaded with CREATE cells very quickly.
>
> Why is that? It shouldn't be facing more than 2x the number of create
> cells that a relay faces, and with the ntor handshake, create cell
> processing ought to get much faster.

Clients often produce rapid bursts of circuit creation. If bursts of
CREATE cells from two or three clients hit an entry guard at the same
time, the guard could be overloaded.

I expect that this link-protocol change will be deployed before a new
circuit-extension protocol is deployed. I expect that the ntor
handshake will not be deployed.

Robert Ransom
Re: [tor-dev] Proposal: Integration of BridgeFinder and BridgeFinderHelper
On 2012-03-22, Mike Perry mikepe...@torproject.org wrote:
> Thus spake Robert Ransom (rransom.8...@gmail.com):
> [ snip ]
>
> Ok, attempt #2. This time I tried to get at the core of your concerns
> about attacker-controlled input by requiring some form of
> authentication on all bridge information that is to be automatically
> configured.

I rewrote most of the 'Security Concerns' section for
BridgeFinder/Helper. Please merge:

  https://git.torproject.org/rransom/torspec.git bridgefinder2

Security Concerns: BridgeFinder and BridgeFinderHelper

 1. Do not allow attacks on your IPC channel by malicious local 'live data'

    The biggest risk is that unexpected applications will be
    manipulated into posting malformed data to the BridgeFinder's IPC
    channel as if it were from BridgeFinderHelper. The best way to
    defend against this is to require a handshake to properly complete
    before accepting input. If the handshake fails at any point, the
    IPC channel MUST be abandoned and closed. Do *not* continue
    scanning for good input after any bad input has been encountered;
    that practice may allow cross-protocol attacks by malicious
    JavaScript running in the user's non-Tor web browser.

    Additionally, it is wise to establish a shared secret between
    BridgeFinder and BridgeFinderHelper, using an environment variable
    if possible. For a good example of how to use such a shared secret
    properly for authentication, see Tor ticket #5185 and/or the
    SAFECOOKIE Tor control port authentication method.

 2. Do not allow attacks against the Controller

    Care has to be taken before converting BridgeFinderHelper data into
    Bridge lines, especially for cases where the BridgeFinderHelper
    data is fed directly to the control port after passing through
    BridgeFinder. Specifically, the input MUST be subjected to a
    character whitelist and should also be validated against a regular
    expression to verify format, and if any unexpected or poorly-formed
    data is encountered, the IPC channel MUST be closed.

    Malicious control-port commands can completely destroy a user's
    anonymity. BridgeFinder is responsible for preventing strings which
    could plausibly cause execution of arbitrary control-port commands
    from reaching the Controller.

 3. Provide information about bridge sources to users

    BridgeFinder MUST provide complete information about how each
    bridge was obtained (who provided the bridge data, where the party
    which provided the data intended that it be sent to users, and what
    activities BridgeFinder extracted the data from) to users so that
    they can make an informed decision about whether to trust the
    bridge.

    BridgeFinder MUST authenticate, for every piece of discovered
    bridge data, the party which provided the bridge address, the party
    which prepared the bridge data in BridgeFinder's input format, and
    the time, location, and manner in which the latter party intended
    that the bridge data be distributed. (Use of an interactive
    authentication protocol is not sufficient to authenticate the
    intended location and manner of distribution of the bridge data;
    those facts must be explicitly authenticated.)

    These requirements are intended to prevent or mitigate several
    serious attacks, including the following:

    * A malicious bridge can 'tag' its client's circuits so that a
      malicious exit node can easily recognize them, thereby
      associating the client with some or all of its anonymous or
      pseudonymous activities. (This attack may be mitigated by new
      cryptographic protocols in a near-future version of Tor.)

    * A malicious bridge can attempt to limit its client's knowledge of
      the Tor network, thereby biasing the client's path selection
      toward attacker-controlled relays.

    * A piece of bridge data containing the address of a malicious
      bridge may be copied to distribution channels other than those
      through which it was intended to be distributed, in order to
      expose more clients to a particular malicious bridge.

    * Pieces of bridge data containing the addresses of non-malicious
      bridges may be copied to other-than-intended distribution
      channels, in order to cause a particular client to attempt to
      connect to a known, unusual set of bridges, thus allowing a
      malicious ISP to monitor the client's movements to other network
      and/or physical locations.

    BridgeFinder MUST warn users about the above attacks, and warn that
    other attacks may also be possible if users accept improperly
    distributed bridge data.

 4. Exercise care with what is written to disk

    BridgeFinder developers must be aware of the following attacks, and
    ensure that their software does not expose users to any of them:

    * An attacker could plant
Re: [tor-dev] Proposal: Integration of BridgeFinder and BridgeFinderHelper
Thus spake Robert Ransom (rransom.8...@gmail.com):

> I rewrote most of the 'Security Concerns' section for
> BridgeFinder/Helper. Please merge:
>
>   https://git.torproject.org/rransom/torspec.git bridgefinder2

[ snip ]

>  3. Provide information about bridge sources to users
>
>     BridgeFinder MUST provide complete information about how each
>     bridge was obtained (who provided the bridge data, where the
>     party which provided the data intended that it be sent to users,
>     and what activities BridgeFinder extracted the data from) to
>     users so that they can make an informed decision about whether to
>     trust the bridge.

I like the idea of passing bridge authentication + attribution up to
Vidalia via POSTMESSAGE somehow. However, encoding it properly is
likely to be problematic and situation-specific. It also feels weird to
have this be a MUST, especially if we're not sure how it can be done
right.

>     BridgeFinder MUST authenticate, for every piece of discovered
>     bridge data, the party which provided the bridge address, the
>     party which prepared the bridge data in BridgeFinder's input
>     format, and the time, location, and manner in which the latter
>     party intended that the bridge data be distributed. (Use of an
>     interactive authentication protocol is not sufficient to
>     authenticate the intended location and manner of distribution of
>     the bridge data; those facts must be explicitly authenticated.)
>
>     These requirements are intended to prevent or mitigate several
>     serious attacks, including the following:
>
>     * A malicious bridge can 'tag' its client's circuits so that a
>       malicious exit node can easily recognize them, thereby
>       associating the client with some or all of its anonymous or
>       pseudonymous activities. (This attack may be mitigated by new
>       cryptographic protocols in a near-future version of Tor.)
>
>     * A malicious bridge can attempt to limit its client's knowledge
>       of the Tor network, thereby biasing the client's path selection
>       toward attacker-controlled relays.
>
>     * A piece of bridge data containing the address of a malicious
>       bridge may be copied to distribution channels other than those
>       through which it was intended to be distributed, in order to
>       expose more clients to a particular malicious bridge.
>
>     * Pieces of bridge data containing the addresses of non-malicious
>       bridges may be copied to other-than-intended distribution
>       channels, in order to cause a particular client to attempt to
>       connect to a known, unusual set of bridges, thus allowing a
>       malicious ISP to monitor the client's movements to other
>       network and/or physical locations.
>
>     BridgeFinder MUST warn users about the above attacks, and warn
>     that other attacks may also be possible if users accept
>     improperly distributed bridge data.

Warning users about technically complicated potential attacks is
unlikely to substantially protect them in practice, and will have many
Re: [tor-dev] Implement JSONP interface for check.torproject.org
Oh, I forgot to mention one requirement: check.torproject.org must be
usable by people who have turned off JavaScript in their browser
(whether TBB or not). That rules out XMLHttpRequest.

Robert Ransom