Re: [tor-dev] [Proposal] A simple way to make Tor-Browser-Bundle more portable and secure
libc is dynamically linked, so one distribution-level upgrade will fix one libc problem. As opposed to having to rebuild every single program and trying to ship that to users in a huge update. The former is less complex. Static linking shifts the burden of tracking and fixing security bugs away from the maintainers of that library, who know it best, onto the program that does the static linking. Big development teams like Google can handle this responsibility and expend the extra cost needed to make it work. Most other projects cannot. Also, many people, such as yourself, don't even seem to be aware of this fundamental shift in responsibility and development costs. This is quite dangerous and naive.

Ivan Markin:
> Yawning Angel:
>> Having to rebuild the browser when the libc needs to be updated seems
>> terrible as well.
>
> Why is it terrible?
> Using static linking drastically reduces overall *complexity*
> (~1/security). If you do use libc code in your stuff then it's a part of
> this stuff. If there is a bug in libc - just rebuild your broken
> software. It either works or not. Doing dynamic linking is leaving it in
> superposition state.
>
> I consider having the browser that builds for >30m is way more terrible.
>
> From
> https://wayback.archive.org/web/20090525150626/http://blog.garbe.us/2008/02/08/01_Static_linking/
> :
>
>> I prefer static linking:
>
>> Executing statically linked executables is much faster, because there
>> are no expensive shared object lookups during exec().

Who cares?

>> Statically linked executables are portable, long lasting and fail
>> safe to ABI changes -- they will run on the same architecture even in
>> 10 years time. Never expect errors like
>> /lib/ssa/libstdc++.so.6: version 'GLIBCXX_3.4.4' not found again.

When was the last time someone got this error? Like, 1992?

>> Statically linked executables use less disk space.
>> Most executables use only a small subset of the functions provided by a
>> static library -- so there is absolutely no reason to link complete
>> static libraries into a static executable (e.g. spoken for a
>> hello_world.c you only need to link vprintf statically into the
>> executable, not the whole static libc!). The contrary is true for
>> dynamic libraries -- you always use the whole library, regardless what
>> functions you are using.
>>
>> Statically linked executables consume less memory because their
>> binary size is smaller and they only map the functions they depend on
>> into memory (contrary to dynamic libs).

Go and dynamically vs statically link libc to a "hello world" program right now and tell me what the size is.

>> The reason why dynamic linking has been invented was not to decrease
>> the general executable sizes or to save memory consumption, or to
>> speed up the exec() -- but to allow changing code during runtime --
>> and that's the real purpose of dynamic linking, we shouldn't forget
>> that.

This guy is rewriting history. It sounds like he's talking about dlopen / dynamic loading, which is not the same thing as dynamic linking. Please do some research before believing random shit on the internet.

X

--
GPG: ed25519/56034877E1F87C35
GPG: rsa4096/1318EFAC5FBBDBCE
https://github.com/infinity0/pubkeys.git

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] A meta-package for Pluggable Transports?
Dafwig: > Ximin Luo: >> I made something like this a few years ago: >> >> https://people.debian.org/~infinity0/apt/pool/contrib/t/tor-bridge-relay/ > > A general question, but related to the sample configuration you've > provided here, as well as other instructions I've seen online: > > I've heard that the Chinese government tests suspected bridges by > attempting to connect to them and seeing if they respond to the Tor > protocol. Whether China is actually doing this or not, it would not be a > terribly difficult thing for any competent censor to do. > > So with this in mind, wouldn't it be best for new bridges to support > ONLY obfs4, and not any of the older protocols? > > In particular, it seems like a very bad idea to enable the default > ORPort (9001), unless I'm missing something. Is it actually necessary to > have an open ORPort in order to work as an obfuscated bridge? If it is > necessary, at least that port should be picked randomly to make it > harder for censors to guess. If it's not needed, then that port should > presumably be set to listen only on 127.0.0.1 or similar. > > This isn't mentioned in any of the bridge-related documentation I've > seen, though I haven't looked very hard. > You are quite probably right. The thing I posted was a sample "initial attempt" and not an end product. (Other protocols like snowflake and meek-client might also be OK since they also don't expose a public listen port.) X -- GPG: ed25519/56034877E1F87C35 GPG: rsa4096/1318EFAC5FBBDBCE https://github.com/infinity0/pubkeys.git ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] A meta-package for Pluggable Transports?
Ximin Luo: > Nima Fatemi: >> [..] >> >> After some discussion on #tor-project a little while ago, the idea of >> having a meta-package that includes all or the most recent transports >> came up. Where people would install this meta package and it would >> automatically take care of the required steps to get the latest >> obfsproxy and set it up. >> >> [..] > > I made something like this a few years ago: > > https://people.debian.org/~infinity0/apt/pool/contrib/t/tor-bridge-relay/ > > I had planned to add a lot more things to it, like what you suggested in the > rest of the email, but never got around to it. > > Someone else could take this as a base to start working from, though. > One reason is because #1922 was never completed. I built the packaging assuming that it would be fixed by the time I finished it, but this hasn't happened yet. X -- GPG: ed25519/56034877E1F87C35 GPG: rsa4096/1318EFAC5FBBDBCE https://github.com/infinity0/pubkeys.git ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] A meta-package for Pluggable Transports?
Nima Fatemi: > [..] > > After some discussion on #tor-project a little while ago, the idea of > having a meta-package that includes all or the most recent transports > came up. Where people would install this meta package and it would > automatically take care of the required steps to get the latest > obfsproxy and set it up. > > [..] I made something like this a few years ago: https://people.debian.org/~infinity0/apt/pool/contrib/t/tor-bridge-relay/ I had planned to add a lot more things to it, like what you suggested in the rest of the email, but never got around to it. Someone else could take this as a base to start working from, though. X -- GPG: ed25519/56034877E1F87C35 GPG: rsa4096/1318EFAC5FBBDBCE https://github.com/infinity0/pubkeys.git ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Git users, enable fsck by default!
On 02/02/16 18:56, Peter Palfrader wrote: > On Tue, 02 Feb 2016, Nick Mathewson wrote: > >> The tl;dr here is: >>* By default Git doesn't verify the sha1 checksums it receives by default. >>* It doesn't look like we've got any inconsistencies in our >> repositories I use, though. That's good! >>* To turn on verification, I think you run: >> >> git config --add transfer.fsckobjects true >> git config --add fetch.fsckobjects true >> git config --add receive.fsckobjects true > > I suspect that setting things globally (in your ~/.gitconfig) > git config --global --add transfer.fsckobjects true > git config --global --add fetch.fsckobjects true > git config --global --add receive.fsckobjects true > might also work. (However, I haven't verified it.) > Tested with $ for i in transfer fetch receive; do git config --global --replace-all "$i.fsckObjects" true; done (--replace-all makes it idempotent). I wrote "fsckObjects" because it's quicker to verify - the man page for git-config says fsckObjects rather than fsckobjects and then you need to do some extra digging to assure yourself it's case-insensitive. X -- GPG: ed25519/56034877E1F87C35 GPG: rsa4096/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git signature.asc Description: OpenPGP digital signature ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Fwd: [guardian-dev] RC-7 is out - try to repro? Re: towards reproducible Orbot
Hey, thanks for this! I'd like to ask that you please upload this (or 15.1) to F-Droid soon. The version currently in F-Droid is RC-4 and is broken with orwall - orwall-allowed/redirected apps see no internet, but tor is connected and orfox can access it. RC-7 fixes this issue; I just tested it out.

X

--
GPG: ed25519/56034877E1F87C35
GPG: rsa4096/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] tor and libressl
On 20/02/15 23:01, Tyrano Sauro wrote:
> I got tor to build with libressl. it works. Is this a good idea? TY

Could you write some more details about how you got this to work? For example, did you link in libressl during the build, did you have to change anything, or did you just drop in libressl.so (or whatever) to a pre-built tor and have everything work?

X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] Hidden Service authorization UI
On 09/11/14 12:50, George Kadianakis wrote:
> Hidden Service authorization is a pretty obscure feature of HSes, that
> can be quite useful for small-to-medium HSes. Basically, it allows
> client access control during the introduction step. If the client
> doesn't prove itself, the Hidden Service will not proceed to the
> rendezvous step.
>
> This allows HS operators to block access at a lower level than the
> application layer. It also prevents guard discovery attacks since the HS
> will not show up in the rendezvous. It's also a way for current HSes to
> hide their address and list of IPs from the HSDirs (we get this for free
> in rend-spec-ng.txt).
>
> In the current HS implementation there are two ways to do authorization:
> https://gitweb.torproject.org/torspec.git/blob/HEAD:/rend-spec.txt#l768
> both have different threat models.

https://gitweb.torproject.org/torspec.git/blob/HEAD:/rend-spec.txt#l936

  936    client-key NL a public key in PEM format

A private key is what's actually generated. Not sure if it's a bug in the spec, or a bug in tor. From a quick read of the rest of it, I'm guessing the spec?

X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] Call for a big fast bridge (to be the meek backend)
On 16/09/14 03:12, David Fifield wrote: The meek pluggable transport is currently running on the bridge I run, which also happens to be the backend bridge for flash proxy. I'd like to move it to a fast relay run by an experienced operator. I want to do this both to diffuse trust, so that I don't run all the infrastructure, and because my bridge is not especially fast and I'm not especially adept at performance tuning. All you will need to do is run the meek-server program, add some lines to your torrc, and update the software when I ask you to. The more CPU, memory, and bandwidth you have, the better, though at this point usage is low enough that you won't even notice it if you are already running a fast relay. I think it will help if your bridge is located in the U.S., because that reduces latency from Google App Engine. The meek-server plugin is basically just a little web server: https://gitweb.torproject.org/pluggable-transports/meek.git/tree/HEAD:/meek-server Since meek works differently than obfs3, for example, it doesn't help us to have hundreds of medium-fast bridges. We need one (or maybe two or three) big fat fast relays, because all the traffic that is bounced through App Engine or Amazon will be pointed at it. My PGP key is at https://www.bamsoftware.com/david/david.asc if you want to talk about it. As an extension, how about putting multiple bridges behind the reflector? Tor does not yet pass the bridge fingerprint to PTs, but we could hack it up along the lines of: Bridge meek 0.0.2.0:1 $FINGERPRINT1 fpr=$FINGERPRINT1 url=https://meek-reflect.appspot.com/ front=www.google.com Bridge meek 0.0.2.0:1 $FINGERPRINT2 fpr=$FINGERPRINT2 url=https://meek-reflect.appspot.com/ front=www.google.com meek-client would pass fpr to the reflector, who would select the bridge it connects the client to. (This is basically what I have in mind for #10196 for flashproxy.) 
X -- GPG: 4096R/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git signature.asc Description: OpenPGP digital signature ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
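The multiple-bridges-behind-one-reflector idea above amounts to the reflector keeping a fingerprint-to-bridge table and routing each session to the bridge the client named in its fpr parameter. A minimal sketch of that selection logic, with invented table entries and a hypothetical default fingerprint:

```python
# Hypothetical fingerprint -> backend bridge table kept by the reflector.
BRIDGES = {
    "FINGERPRINT1": ("bridge1.example", 7002),
    "FINGERPRINT2": ("bridge2.example", 7002),
}

def select_bridge(params, default="FINGERPRINT1"):
    """Pick the backend bridge for one client session, based on the
    'fpr' parameter the meek-client passed along to the reflector."""
    fpr = params.get("fpr", default)
    try:
        return BRIDGES[fpr]
    except KeyError:
        raise ValueError("unknown bridge fingerprint: " + fpr)

print(select_bridge({"fpr": "FINGERPRINT2"}))  # -> ('bridge2.example', 7002)
```

Because the client also puts the fingerprint in its Bridge line, tor would detect a reflector that dishonoured the request, as the thread notes.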
Re: [tor-dev] Draft proposal: Tor Consensus Transparency
(Disclaimer: I don't know the details of how consensus documents work. Some assumptions I made might be wrong.)

In section 2 (Motivation), you mention a partition attack. I think the rest of the document neglects the topic of *actually protecting against this*, instead focusing only on running the log. I want to emphasise that CT logs are only an enabler, not a guarantor, of the security property. The monitors are the important thing that actually detect attacks. (Someone needs to be watching, otherwise all security measures are useless.)

With X509, domain owners have rightful control over what certificates say, but CAs can issue certs on their behalf. CT monitors protect against this by getting data from logs and checking that the certificates for each domain have all been issued with the permission of the owner.

With the tor consensus document model, it's less clear exactly what the monitors should check for. To protect against a partition attack, you might do something like: monitors should check that the diff between consecutive consensus documents is small, within a small time period. But the precise parameters will need to be adjusted and justified, and this is important for robust detection of the attack. Also, are there any other threats you want to protect against?

> A log monitor verifies that the contents of the log are consistent with
> the rules of the Tor network, notably that all entries are properly
> formed and signed Tor consensus documents.

Don't Tor clients do this already? CT gives us the ability to detect things that a single client cannot already detect, such as a partition attack on the overall network, so we should focus on that.

X

On 05/07/14 01:05, Linus Nordberg wrote:
> Hi Tor devs,
>
> It's surprisingly hard to work on Tor during the Tor developer meetings!
> My apologies for not publishing this text until now, despite my repeated
> ranting about the subject the last few days. Well, here it is, in an
> early draft version.
Thank you all who've listened patiently and given valuable feedback. I welcome more feedback from the list. Thanks in advance.

Filename: xxx-tor-consensus-transparency.txt
Title: Tor Consensus Transparency
Author: Linus Nordberg
Created: 2014-06-28
Status: Draft

0. Introduction

WARNING!!! EARLY DRAFT -- MISSING IMPORTANT BITS AND PIECES!

This document describes how to provide and use public, append-only, untrusted logs containing Tor consensus documents, much like what Certificate Transparency [RFC6962] does for X.509 certificates. Tor relays and clients can then refuse to use a consensus not present in logs of their choosing.

WARNING!!! EARLY DRAFT -- MISSING IMPORTANT BITS AND PIECES!

1. Overview

Using a public, append-only, untrusted log like the history tree described in [CrosbyWallach], Tor clients and relays verify that consensus documents are present in one or more logs before using them.

Consensus-users, i.e. Tor clients and relays, expect to receive one or more proofs of inclusion with new consensus documents. A proof of inclusion is a hash sum representing the tree head of a log, signed by the log's private key, and an audit path listing the nodes in the tree needed to recreate the tree head. Consensus-users are configured to use one or more logs by listing a log address and a public key for each log. This is used to verify that a given consensus document is present in a given log.

Anyone can submit a properly formatted and signed consensus document to a log and get a signed proof of inclusion in return. Directory authorities should do this and include the proofs when serving consensus documents. Directory caches and consensus-users receiving a consensus not including a proof of inclusion submit the document and use the proof they receive in return.

Auditing log behaviour and monitoring the contents of logs is performed in cooperation between the Tor network and external services.
Directory caches act as log auditors with help from Tor clients gossiping about what they see. Directory authorities are good candidates for monitoring log content since they know what documents they have issued. Anybody can run both an auditor and a monitor though, which is an important property of the proposed system.

Summary of proposed changes to Tor:

- Directory authorities start submitting newly created consensuses to at least one public log.
- Tor clients and relays receiving a consensus not accompanied by a proof of inclusion start submitting that to at least one public log.
- Consensus-users start rejecting consensuses accompanied by an invalid proof of inclusion.
- A new cell type LOG_GOSSIP is defined, for clients and relays to exchange information about tree heads
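To make the earlier point about monitors concrete, here is a minimal sketch of the kind of check a consensus monitor could run to flag a possible partition attack: compare consecutive consensuses and alarm if the relay set changed too much. The churn metric (symmetric difference over union) and the threshold are illustrative assumptions of mine, not values from the proposal.

```python
# Toy consensus monitor: flag a suspiciously large diff between two
# consecutive consensuses.  Metric and threshold are illustrative only.

def consensus_churn(prev_relays, curr_relays):
    """Fraction of the combined relay set that changed between consensuses."""
    prev, curr = set(prev_relays), set(curr_relays)
    union = prev | curr
    if not union:
        return 0.0
    return len(prev ^ curr) / len(union)

def check_consensus(prev_relays, curr_relays, threshold=0.5):
    """Return True if the churn looks normal, False if it warrants alarm."""
    return consensus_churn(prev_relays, curr_relays) <= threshold

# One relay swapped out of five: small churn, looks normal.
prev = ["A", "B", "C", "D", "E"]
print(check_consensus(prev, ["A", "B", "C", "D", "F"]))  # True
# Wholesale replacement: the kind of diff a partition attack might produce.
print(check_consensus(prev, ["X", "Y", "Z"]))  # False
```

A real monitor would of course fetch signed consensuses from the log, verify signatures, and tune the threshold against historical churn rates.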
[tor-dev] Composing multiple pluggable transports
Hi Steven, Nikita,

I was told that you two are interested in the idea of composing multiple PTs together. Here are our ideas on it. We have a GSoC student, Quinn (also at Illinois), working on turning this into reality.

## Concepts

On the most abstract level, pt-spec.txt defines an input interface to some generic component. It consists of the following:

- dest addr, of the Bridge
- headers/metadata, such as fingerprint[1] or other PT-level settings
- data, the actual application-layer stream, such as OR protocol

The concrete form of this is the SOCKS protocol, which allows tor to make a request with the above interface. (Actually, SOCKS does not fully support metadata, which means we've had to extend it ourselves. HTTP might have worked better.)

pt-spec.txt does not specify the output interface. This means it's impossible to chain general PTs, because there's nothing defined to chain. To work around this, we observe that in practice, many PTs follow the output interface below; we'll call these direct PTs:

- data, sent directly to the (TCP) address given at the input

Direct PTs include obfsproxy, scramblesuit, fteproxy. Indirect PTs are all other PTs, which do something other than make a straight TCP connection *to the endpoint Bridge address*. These include flashproxy and meek.

## Design

Our combiner will chain up a sequence of direct PTs, then the last PT can be any PT (either direct or indirect). So for example it could potentially support obfs3|fte|fte|fte|flashproxy and obfs3|fte, but not flashproxy|obfs3|obfs3|meek. Not every chain makes sense from a security viewpoint, of course.

Because the output interface (TCP) does not exactly match the input interface (SOCKS), we have a component called a shim, which has an input interface of TCP and an output interface of SOCKS. This is placed between each pair of PTs in the chain.
In simple terms, it works like this:

pt0 (out) -TCP- (in) shim1 (out) -SOCKS-to-next-shim- pt1 - to shim2

The extra info present in SOCKS (dest addr and metadata) but absent from TCP will be supplied by the combiner, as described in the next section. Also, in practice, these shims are within the same process as the combiner; there is no need to start a new process for them.

## Algorithm

Let the ith PT be listening on port pPT[i], done at program start. Later, when tor wants to connect to a PT-chain, we intercept this connection, extracting the following information (the SOCKS in-interface):

- dest addr, of the Bridge
- headers: generic headers, plus PT-specific headers for each PT in the chain[2]
- data, OR protocol

Then, the combiner starts a new shim, one for each component in the chain, each listening on pS[i]. Each shim[i] is set up so that when it receives a connection on pS[i], it tells PT[i] (i.e. the SOCKS client listening on pPT[i]) to connect to pS[i+1], with the metadata set to the generic headers plus the specific headers for PT[i], and the data set to whatever it receives from the connection. (In practice the shims are set up in reverse order, because shim[i] needs to know what pS[i+1] is, and we want to take advantage of the OS's feature to listen on any free port.)

Special cases:

- The last (i.e. (n-1)th) shim tells its SOCKS client to connect to the original dest addr, of the Bridge - there is no pS[last+1].
- The first shim does not need to exist, since the combiner is just sending data to itself, so it can do this in-process.

After all the shims are set up, the combiner starts forwarding data from tor over onto PT[0]. Then the magic is complete.
ASCII diagram, minus annotations about metadata:

  [ tor ] ---socks, to bridge--> [ PT combiner ]
                                       |
                                       | (in-process)
                                       v
  [ combiner ] -socks, to pS[1]-> [ direct PT[0] ] ---tcp---> pS[1]
  [ shim[1]  ] -socks, to pS[2]-> [ direct PT[1] ] ---tcp---> pS[2]
       [etc]
  [ shim[y]  ] -socks, to pS[z]-> [ direct PT[y] ] ---tcp---> pS[z]
  [ shim[z]  ] -socks, to bridge> [ any PT[z]    ] -whatever it wants-> bridge
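The reverse-order wiring of shims can be sketched as a small planning function: the last PT is pointed at the real bridge, and every earlier PT is pointed at the next shim's listener. The fixed shim port numbering below is a made-up simplification; as noted above, real code would let the OS pick free ports.

```python
SHIM_BASE_PORT = 9000  # hypothetical; real code would bind port 0 and ask the OS

def wire_chain(pt_ports, bridge_addr):
    """For each PT[i] in the chain, compute the SOCKS destination its shim
    tells it to reach: PT[i] -> shim[i+1], and the last PT -> the bridge."""
    n = len(pt_ports)
    plan = []
    next_dest = bridge_addr            # the (n-1)th PT reaches the bridge itself
    for i in reversed(range(n)):       # shims are created last-to-first
        plan.append((i, pt_ports[i], next_dest))
        next_dest = ("127.0.0.1", SHIM_BASE_PORT + i)  # shim[i]'s listener
    plan.reverse()
    return plan

# A toy obfs3|fte|flashproxy chain with invented PT client ports.
print(wire_chain([3001, 3002, 3003], ("bridge.example", 443)))
```

Each tuple says "tell the SOCKS client on this PT port to connect here"; actual shims would then pump bytes between their TCP listener and that SOCKS connection.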
[tor-dev] New events calendar
Hi all, I decided to create a new calendar, since the old one is out-of-date and we seem to have lost the access. View as a web page: https://www.google.com/calendar/embed?src=dt92shou5q1ooe1kptubhclo4s%40group.calendar.google.com Import into your calendar program: https://www.google.com/calendar/ical/dt92shou5q1ooe1kptubhclo4s%40group.calendar.google.com/public/basic.ics If anyone would like write-access, or for me to make a one-off change, just ask! Please also update any links to the old calendar in trac; I forget where they all are. X -- GPG: 4096R/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git signature.asc Description: OpenPGP digital signature ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Improving the structure of indirect-connection PTs (meek/flashproxy)
On 16/04/14 15:56, George Kadianakis wrote: Ximin Luo infini...@torproject.org writes: snip So instead of having, as currently: (old, hacky) Bridge flashproxy (dummy addr) We would have the following cases: (1) Bridge flashproxy (real addr) (2) Bridge flashproxy (real addr) (fingerprint) (3, not-ideal) Bridge flashproxy (dummy addr) (fingerprint) Option (3) is quite nice, since in indirect PTs the actual address is irrelevant - the Tor client never tries to connect to it. I suggest that we have a special syntax for it though, to explicitly discourage hacks that {use dummy addresses but which are treated as real addresses by the underlying application}, since this breaks assumptions of the PT spec. Hm, but this kind of kills the magic of indirect PTs, right? That is, users who want to use flashproxy in the way above, will have to know an address or a fingerprint of the bridge beforehand. What is the use case? Advanced users? I guess most users (people who use the TBB) will still need to use the current scheme, right? We can distribute the fingerprints of the default meek/fp Bridges in the default torrc, just like we distribute non-authenticated defaults currently. If we introduce new ones (e.g. if the old defaults are blocked or need to be shutdown, or just to increase capacity), BridgeDB can distribute these new ones with new fingerprints. (But indirect PTs should be harder to block anyways.) Also, if all traffic goes over the midpoint, how can we make sure that the midpoint will connect us to the bridge requested with: (1) Bridge flashproxy (real addr) ? Yes, this by itself probably doesn't gain that much, I just included it for completeness. (If we imagine a PT that has a non-secret but parameterised obfuscation method (bananaphone?), then we would need this sort of thing if we wanted to use a controller to multiplex between multiple of those Bridges. But ideally everything would have a fingerprint and be strongly authenticated.) 
X FWIW, I liked your argument with regards to authentication, and David's reply citing a few tickets that detail the (lack of) threat model for Tor bridges... -- GPG: 4096R/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git signature.asc Description: OpenPGP digital signature ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
Re: [tor-dev] Improving the structure of indirect-connection PTs (meek/flashproxy)
On 16/04/14 16:11, Ximin Luo wrote: On 16/04/14 15:56, George Kadianakis wrote: Ximin Luo infini...@torproject.org writes: Hm, but this kind of kills the magic of indirect PTs, right? That is, users who want to use flashproxy in the way above, will have to know an address or a fingerprint of the bridge beforehand. What is the use case? Advanced users? I guess most users (people who use the TBB) will still need to use the current scheme, right? We can distribute the fingerprints of the default meek/fp Bridges in the default torrc, just like we distribute non-authenticated defaults currently. If we introduce new ones (e.g. if the old defaults are blocked or need to be shutdown, or just to increase capacity), BridgeDB can distribute these new ones with new fingerprints. (But indirect PTs should be harder to block anyways.) I suppose that, with an indirect PT, it is no longer necessary to connect to a Bridge - we should be able to connect, via the midpoint, directly to a normal public entry node. Then its fingerprint would be available in the consensus. Then as you say, the user would not need to bother with fingerprints (or Bridge lines at all), and I can definitely see why this was a strong motivator in the current design of meek/fp. (This would be similar to the 5-hop separate untrusted bridges vs trusted guard suggestion in David's post, but cutting out the untrusted bridge part.) So, I'd support an effort to move in this direction as well. However, it would take more changes and more thought than my original proposal, though it's also strictly better than it, I think - i.e. more flexibility, more usability, no less security. X -- GPG: 4096R/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git signature.asc Description: OpenPGP digital signature ___ tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
[tor-dev] Improving the structure of indirect-connection PTs (meek/flashproxy)
## Background

Pluggable Transports are proxy programs that help users bypass censorship.

[App client] - XXX EVIL CENSOR HAS YOU XXX ACCESS DENIED XXX

[App client] - [PT client] - (the cloud!) - [PT server] - [App server]

The structural design, on the client side, is roughly:

1. App client specifies an endpoint to reach
2. PT client receives an instruction, via SOCKS, to connect to this endpoint
3. PT client does its thing, magic happens (intentionally vague)

## In Tor

Each endpoint is specified by a Bridge line, in the form of an IP address and an optional fingerprint (for authentication). This point is not emphasised in existing docs, but it is important for the topic of this email: both the IP address and the fingerprint are potential *identifiers* of the endpoint. The former is an impure name, the latter a pure name.

Currently, we have two main types of PT:

- direct PTs - connect to the endpoint directly via a TCP connection
  - these PTs don't try to hide the fact that you're contacting X on addrX;
    instead, they usually transform the traffic so it's not identifiable
  - e.g. obfs3, fte, scramblesuit
- indirect PTs - connect to the endpoint indirectly, via special means
  - flashproxy - connects via an ephemeral browser proxy
  - meek - connects via an online web service

I will now argue that indirect PTs should do things in a specific way, which is *not* the way meek and flashproxy currently do things.

## Meek and flashproxy

Meek and flashproxy provide an indirect way of accessing Tor. Instead of connecting directly to a Bridge (which might be blocked), the client connects via a midpoint that is harder to block. Very very roughly:

(meek/fp controller)
[meek/fp client] - [meek/fp midpoint] - [freedom!]

The Bridge line in the user's torrc is completely ignored; we use a dummy value, like:

Bridge flashproxy 0.0.1.0:1

Instead, it is the controller that decides which endpoint (which Bridge) the midpoint should connect to.
(In meek, the controller is the same entity as the midpoint, but it helps our analysis to consider the two functions separately.)

## The problem

The problem with the above structure is that it is incompatible with the metaphor of connecting to a specific endpoint. This is what the PT spec is about, even though it does not explicitly mention this viewpoint. Instead, meek and flashproxy provide the metaphor of connecting to a global homogeneous service. This has positive consequences, such as the user no longer having to bother to find Bridges, but also has several negative consequences:

1. The Tor client can no longer authenticate the endpoint. Although currently Tor makes this optional, it is strongly recommended, to prevent a MitM between the client and the server. Even if the midpoint does this, it is not the end-to-end authentication that we would require for strong security.

2. Since the endpoints are not chosen by the user, this may have consequences for anonymity. IANAAR, but this has not yet been looked into.

3. The Tor client (and other applications that use the PT spec) internally use the endpoints metaphor. They may make performance assumptions based on endpoints being configured with different addresses. (Perhaps also security assumptions, although perhaps not, due to having to defend against the sybil attack anyway.) Breaking this metaphor is not a good design principle.

4. For an application like i2p, where each peer cares much more about *exactly which* endpoint it connects to (e.g. because e2e fingerprint authentication is mandatory), the metaphor of endpoints is even more important. Such applications will not be able to take advantage of these indirect-connection PTs.

5. Chaining a PT that *requires* strong identification (e.g. scramblesuit, for c2s auth) is impossible under this scheme, since the end client cannot select the right server to authenticate against.
## The solution

The solution is simple: the indirect PT client simply has to actually *make use* of the Bridge line, instead of totally dropping this information.

The meek/flashproxy controllers offer service to a finite set of Bridges. [A] The client should be able to select one of these, specify its fingerprint and any other shared secrets on their torrc Bridge line, and the indirect PT will tell the controller to connect *to specifically this Bridge*. The controller should honour this request. If it doesn't and the fingerprint is specified, it will be caught out by the Tor client.

So instead of having, as currently:

(old, hacky) Bridge flashproxy (dummy addr)

We would have the following cases:

(1) Bridge flashproxy (real addr)
(2) Bridge flashproxy (real addr) (fingerprint)
(3, not-ideal) Bridge flashproxy (dummy addr) (fingerprint)

Option (3) is quite nice, since in indirect PTs the actual address is irrelevant - the Tor client never tries to connect to it. I suggest that we have a special syntax for it though, to explicitly discourage hacks that {use dummy
Re: [tor-dev] Improving the structure of indirect-connection PTs (meek/flashproxy)
On 15/04/14 14:03, Ximin Luo wrote:
>   (3, not-ideal)  Bridge flashproxy (dummy addr) (fingerprint)
>
> Option (3) is quite nice, since in indirect PTs the actual address is irrelevant - the Tor client never tries to connect to it. I suggest that we have a special syntax for it though, to explicitly discourage hacks that {use dummy addresses but which are treated as real addresses by the underlying application}, since this breaks assumptions of the PT spec. For example:
>
>   (3, better)  Bridge flashproxy - (fingerprint)
>
> We would add to the PT spec something like:
>
>   "-" is a special hostname syntax in Bridge lines. It means that the address of this Bridge does not concern the underlying application (e.g. Tor), since it will be indirectly reached by the PT client. (If a fingerprint is given, it will still be checked by Tor.)

Hmm, for this to work (select the endpoint by fingerprint only), tor will need to pass the fingerprint to the PT client during the SOCKS connection as well. It seems this is not the case from pt-spec.txt:

  Example: if the bridge line is "bridge trebuchet www.example.com: 09F911029D74E35BD84156C5635688C009F909F9 rocks=20 height=5.6m" AND if the Tor client knows that the 'trebuchet' method is supported, the client should connect to the proxy that provides the 'trebuchet' method, ask it to connect to www.example.com, and provide the string "rocks=20;height=5.6m" as the username, the password, or split across the username and password.

Perhaps we can add the fingerprint to this, as part of Yawning's SOCKS5 extensions.

X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git

signature.asc
Description: OpenPGP digital signature

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
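To make the per-connection args mechanism concrete: pt-spec smuggles a "k1=v1;k2=v2" string through the SOCKS5 username/password fields, possibly split at an arbitrary point. A PT client could recover a fingerprint passed this way roughly as follows - a sketch only; the `fingerprint` key is the proposal here, not part of pt-spec, and pt-spec's backslash-escaping of ';' and '=' is ignored for brevity:

```python
def parse_socks_args(username, password):
    """Recover per-connection PT args that the Tor client smuggles
    through the SOCKS5 username/password fields as "k1=v1;k2=v2"."""
    # pt-spec allows the args string to be split at any point across
    # the two fields, so rejoin them before parsing.
    raw = (username or "") + (password or "")
    args = {}
    for pair in raw.split(";"):
        if not pair:
            continue
        key, _, value = pair.partition("=")
        args[key] = value
    return args

# Split mid-pair across username and password, as pt-spec permits:
args = parse_socks_args(
    "rocks=20;heig",
    "ht=5.6m;fingerprint=09F911029D74E35BD84156C5635688C009F909F9")
```

The indirect PT client would then hand `args["fingerprint"]` to its controller to select the Bridge, instead of using the (dummy) address.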
Re: [tor-dev] Improving the structure of indirect-connection PTs (meek/flashproxy)
On 15/04/14 19:36, David Fifield wrote:
> On Tue, Apr 15, 2014 at 02:03:43PM +0100, Ximin Luo wrote:
>> ## The problem
>>
>> The problem with the above structure is that it is incompatible with the metaphor of connecting to a specific endpoint. This is what the PT spec is about, even though it does not explicitly mention this viewpoint. Instead, meek and flashproxy provide the metaphor of connecting to a global homogeneous service. This has positive consequences, such as the user no longer having to bother to find Bridges, but also several negative consequences:
>>
>> 1. The Tor client can no longer authenticate the endpoint. Although Tor currently makes this optional, it is strongly recommended, to prevent a MitM between the client and the server. Even if the midpoint does this, it is not the end-to-end authentication that we would require for strong security.
>
> I see this somewhat differently. You still choose and authenticate the second and third hops. I heard from Roger that it is a sort of accident that bridge-using circuits use three hops, anyway. It should be that there are four: the first hop is your untrusted bridge address you got from wherever, and the second is your guard. Would a design like that make most of these issues go away?

I think this would be OK conceptually, but it would extend the circuit by one hop, to 5 total hops. Currently, we have (with meek/fp):

  PT client - midpoint - untrusted bridge - tor relay - tor exit

My proposed fix would turn it into:

  PT client - midpoint - trusted bridge - tor relay - tor exit

Your suggestion would be analogous to:

  PT client - midpoint - untrusted bridge - trusted guard - tor relay - tor exit

It also confuses the model a little, since the untrusted bridge does not help toward anonymity (since it can be MitMd), but is still running Tor, solely to bypass censorship.

> There's an old ticket here, "Let bridge users specify that they don't care if their bridge changes fingerprint."
> https://trac.torproject.org/projects/tor/ticket/3292
>
> which also ties in with this blog post, "Different Ways to Use a Bridge":
>
> https://blog.torproject.org/blog/different-ways-use-bridge
>
> Completion of #3292 would be a beautiful thing, I think, for flash proxy, as it would allow us easily to round-robin multiple websocket bridges. (Currently you can't do that because the tor client freaks out; see https://trac.torproject.org/projects/tor/ticket/7153#comment:5.)

If by "bridge" you mean "authenticated relay, 2 hops before the exit", then I'm not sure how useful round-robin between multiple untrusted bridges really is, since this opens you up to MitM at that point. What exactly are we protecting against by refusing to use the network when A's fingerprint changes? MitM on A, and revelation of my first-hop OR traffic to the attacker? Am I wrong here? Or is this not a big deal for anonymity?

One can tweak #3292 to prevent MitM: instead of allowing *any* fingerprint, one would be able to specify multiple fingerprints for the same IP address, and the Tor client would treat these as separate Bridges (since they are separate). I believe this model is clearer and closer to reality, namely the endpoints metaphor. It's also similar to my (3) suggestion from before.

> Some other relevant tickets about non-authentication of bridges:
>
>   analyze security tradeoffs from using a socks proxy vs a bridge to reach the Tor network
>   https://trac.torproject.org/projects/tor/ticket/2764
>
> For "socks proxy", substitute "indirect proxy", and it works the same. I think of indirect proxies like flash proxy as untrusted unauthenticated things that just get you to the Tor network, which you then authenticate, the same as a socks proxy.

The quotes there that I agree with are, from a *security* perspective (for a broad definition of security), "is there really any difference between a socks proxy and a bridge relay?" and "I don't see any huge roadblocks to having bridges that are just vanilla proxies.
We should deploy them if we can make them usable, and maybe someday somebody will show us it was a bad idea."

>   Tor build variant to support lightweight socks bridge
>   https://trac.torproject.org/projects/tor/ticket/3466

I largely agree with these quotes, but this would be assuming the socks proxy is authenticated (or can be authenticated) *and* the end-client can completely control the second hop after it. Neither of these properties is true for the indirect proxying of meek/flashproxy.

X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] How to run a headless second Firefox instance?
On 09/04/14 07:29, David Fifield wrote:
> It gets the job done, but it sucks because the first thing you see is the dialog and you have to know not to close it. Is there a way to accomplish the same thing (keep the browser running, but don't show a browser window) without raising a conspicuous dialog?

You could play further with this:

  $ nc -l -p 
  $ iceweasel -no-remote -p testing -chrome http://localhost:

If there's no server listening, firefox opens up a "can't connect" page. Otherwise, it continues running, with no UI, past the 3-minute TCP timeout mark, even if you kill nc in the meantime. However, I am not sure if the process is actually responsive, or simply hanging waiting for an HTTP response - so you'll need to test that. Also, there's no guarantee that firefox will keep this behaviour. Failing everything, maybe you could return a real XUL response that displays "don't close me"?

X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
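If nc proves too fragile for this trick, a tiny stand-in listener that accepts the connection and then deliberately never answers might do the same job. A sketch only: `hold_open_listener` is a hypothetical helper, and whether Firefox stays responsive while waiting on it is exactly the open question above.

```python
import socket
import threading

def hold_open_listener(host="127.0.0.1", port=0):
    """Listen on (host, port) and accept connections, but never send a
    reply - the point is to keep the peer (e.g. Firefox) waiting.
    Returns the server socket and the actual port bound."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    held = []  # keep accepted sockets referenced so they stay open

    def accept_loop():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:  # server socket was closed; stop
                return
            held.append(conn)  # accept, then deliberately stay silent

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv, srv.getsockname()[1]
```

Usage would be `srv, port = hold_open_listener()` and then pointing iceweasel at `http://localhost:` followed by that port.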
Re: [tor-dev] How to run a headless second Firefox instance?
On 09/04/14 16:31, Ximin Luo wrote:
> On 09/04/14 07:29, David Fifield wrote:
>> It gets the job done, but it sucks because the first thing you see is the dialog and you have to know not to close it. Is there a way to accomplish the same thing (keep the browser running, but don't show a browser window) without raising a conspicuous dialog?
>
> You could play further with this:
>
>   $ nc -l -p 
>   $ iceweasel -no-remote -p testing -chrome http://localhost:
>
> If there's no server listening, firefox opens up a "can't connect" page. Otherwise, it continues running, with no UI, past the 3-minute TCP timeout mark, even if you kill nc in the meantime. However, I am not sure if the process is actually responsive, or simply hanging waiting for an HTTP response - so you'll need to test that. Also, there's no guarantee that firefox will keep this behaviour.

Firefox hangs around if I exit nc with ctrl-D. I assume that this means it closed the TCP connection, so the fact that Firefox is still running afterwards is a good sign.

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] Using the HS protocol for unlinkability only
On 26/03/14 16:54, Michael Rogers wrote:
> Hi all,
>
> (Please let me know if this belongs on tor-talk instead of here.)
>
> I'm working on a messaging app that uses Tor hidden services to provide unlinkability (from the point of view of a network observer) between users and their contacts. Users know who their contacts are, so we don't need mutual anonymity, just unlinkability. I wonder whether we need everything that the Tor hidden service protocol provides, or whether we might be able to save some bandwidth (for clients and the Tor network) and improve performance by using parts of the hidden service protocol in a different way.
>
> First of all, we may not need to publish hidden service descriptors in the HS directory, because we have a way for clients to exchange static information such as HS public keys out-of-band. Second, we may not need to use introduction points to protect services from DoS attacks - we can assume that users trust their contacts not to DoS them. Third, we may be able to reduce the number of hops in the client-service circuits, because we don't need mutual anonymity.
>
> This isn't the first app to use hidden services for unlinkability, so I expect this topic's come up before. Are there any discussions I should look at before coming up with hare-brained schemes to misuse the hidden service protocol?
>
> Cheers,
> Michael

A further requirement, as I understand from a previous discussion with Michael, is that the app can't assume that each user has a publicly-accessible IP/port. One idea (instead of using HS) is to simply have each user also run a normal (non-hidden) internet service, and have the contact connect to them through Tor. However, this doesn't address the IP/port issue. We could look into ICE as a NAT traversal technique, but it's far from clear whether this will work *through* Tor (i.e. having the exit node run ICE with the contact). So another possibility is using an HS-like system, where the rest of the network provides the listenable ports.
X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] GSoC - Tor pluggable transports project
On 14/03/14 20:56, quinn jarrell wrote:
> Hi Tor Devs,
>
> I'm a computer engineering undergrad at University of Illinois Urbana-Champaign. I am interested in working on a GSoC pluggable transports project. I mainly code in python or common lisp and have worked with asynchronous programming before (albeit it was a java framework, not twisted). The two projects I am interested in are the pluggable transports combiner or building the CBR transport plugin.
>
> I really like the idea of the transport combiner but I have a few questions about how far along the design/implementation is. I noticed that there is a prototype called obfs-flash proxy and am wondering if I would work on extending that to work with other plugins or if I would work on a separate combiner. Is the design set in stone yet or is the spec for it still shifting? And if the spec is still shifting, how should I address that in my GSoC proposal?

The design is about half-way finished, though we haven't started a proper spec document yet. You would finish the design, #10061 [1] - basically handle the more complex cases, to generalise obfs-flash to more types of PTs. This won't be a trivial task - for example, note the last two comments by me and asn in that ticket, which will involve some careful thought to address.

I have some rough notes and diagrams in [2] which give a brief description and justification of the design of the combiner-client. (We are favouring option 2, composition.) For the combiner-server, [3] is a similar sort of diagram. You may find that you will need to tweak these designs, to address all the issues that you may run into.
The two immediate next tasks are:

  https://trac.torproject.org/projects/tor/ticket/9580 - fairly simple; you could do this as part of your proposal, to show us your coding skills

  https://trac.torproject.org/projects/tor/ticket/9744 - a bit more complex, partly done for obfs-flash-server already - but I imagine the syntax of the config file will change as we work our way through #10061.

We should also decide what to call it: https://trac.torproject.org/projects/tor/ticket/9743 We're favouring either "fog" or "fogproxy", but we haven't made an official decision yet.

We have IRC meetings every 2 weeks on Friday at 17:00 UTC; the next one is on March 28th. Feel free to come along!

[1] https://trac.torproject.org/projects/tor/ticket/10061
[2] https://github.com/infinity0/tor-notes
[3] https://trac.torproject.org/projects/tor/attachment/ticket/9744/server-superproxy.jpg

> I also talked with Yawning in IRC who pointed me to the CBR transport plugin/unix like plugins to change data length/obfuscate data/add noise. I love the idea that with the combiner a CBR transport could be easily added to any existing transport, but if the combiner has not been built/the spec is still developing, I wonder how useful it would be. Also the CBR plugin does not seem to be enough work for a GSoC project, but I could definitely be underestimating the difficulty. I would like to work on either of these projects but I do not know which one would be the more helpful/necessary, so I would love to hear your opinions on which would be better to work on.

Both would be good; you can pick. The CBR plugin has no existing development work behind it (only research), which may or may not suit what you want to do. I am confident it will be enough work for a whole GSoC project, however. obfs-flash is written in Python and Go, which may help you learn async programming faster.
X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] [tor-commits] [flashproxy/master] remove failed connections from proxy_pairs as well
On 10/03/14 17:02, David Fifield wrote:
> On Fri, Mar 07, 2014 at 02:39:18PM +, infini...@torproject.org wrote:
>> commit 05b9c101ba9afe4653d1eff6f5414f90f22ef042
>> Author: Ximin Luo infini...@torproject.org
>> Date: Fri Mar 7 13:39:31 2014 +
>>
>>   remove failed connections from proxy_pairs as well - this is a pretty major fix, as the previous behaviour effectively disabled a proxy after 5 failed connections (= max_num_proxy_pairs / 2)
>
> Can you say some more about the circumstances of this fix? What behavior changed for you after making it? I can't reproduce what's described in the commit message. With 9dea0c6c90dedc868dfaa84add2bfa19a2039281, I set up a phony facilitator with
>
>   ncat -v -lk 7000 --sh-exec ./facilitator.py
>
> where facilitator.py has the contents
>
>   #!/usr/bin/env python
>   import random
>   import sys
>
>   port = random.randint(1, 11000)
>   print "HTTP/1.0 200 OK\r"
>   print "Content-Type: application/x-www-form-urlencoded\r"
>   print "Cache-Control: no-cache\r"
>   print "Access-Control-Allow-Origin: *\r"
>   print "\r"
>   sys.stdout.write("check-back-in=10&client=127.0.0.1:" + str(port) + "&relay=173.255.221.44:9901")
>
> I then browse to
>
>   file:///home/david/flashproxy/proxy/embed.html?debug&facilitator=http://127.0.0.1:7000&initial_facilitator_poll_interval=1&unsafe_logging&max_clients=2
>
> All the client connections are going to closed ports, but the "Client: error" lines still appear and clients don't accumulate in the request URL. The same happens if I use an open port with a Ncat listener that just closes the connection. The same happens if I change to a non-listening Internet address like 1.1.1.1.
>
>   2014-03-10T16:28:48.173Z | Starting; will contact facilitator in 1 seconds.
>   2014-03-10T16:28:49.179Z | Facilitator: connecting to http://127.0.0.1:7000?r=1&transport=websocket.
>   2014-03-10T16:28:49.203Z | Next check in 10 seconds.
>   2014-03-10T16:28:49.204Z | Facilitator: got client:{ host: 127.0.0.1, port: 10782 } relay:{ host: 173.255.221.44, port: 9901 }.
>   2014-03-10T16:28:49.205Z | 127.0.0.1:10782|173.255.221.44:9901 : Client: connecting.
>   2014-03-10T16:28:49.206Z | 127.0.0.1:10782|173.255.221.44:9901 : Client: connecting.
>   2014-03-10T16:28:49.207Z | 127.0.0.1:10782|173.255.221.44:9901 : Client: error.
>   2014-03-10T16:28:49.208Z | 127.0.0.1:10782|173.255.221.44:9901 : Client: closed.
>   2014-03-10T16:28:49.209Z | Complete.
>   2014-03-10T16:28:49.209Z | 127.0.0.1:10782|173.255.221.44:9901 : Client: error.
>   2014-03-10T16:28:49.209Z | 127.0.0.1:10782|173.255.221.44:9901 : Client: closed.
>   2014-03-10T16:28:49.210Z | Complete.
>   2014-03-10T16:28:59.205Z | Facilitator: connecting to http://127.0.0.1:7000?r=1&transport=websocket.
>   2014-03-10T16:28:59.226Z | Next check in 10 seconds.
>   2014-03-10T16:28:59.227Z | Facilitator: got client:{ host: 127.0.0.1, port: 10221 } relay:{ host: 173.255.221.44, port: 9901 }.
>   2014-03-10T16:28:59.227Z | 127.0.0.1:10221|173.255.221.44:9901 : Client: connecting.
>   2014-03-10T16:28:59.227Z | 127.0.0.1:10221|173.255.221.44:9901 : Client: connecting.
>   2014-03-10T16:28:59.229Z | 127.0.0.1:10221|173.255.221.44:9901 : Client: error.
>   2014-03-10T16:28:59.229Z | 127.0.0.1:10221|173.255.221.44:9901 : Client: closed.
>   2014-03-10T16:28:59.230Z | Complete.
>   2014-03-10T16:28:59.230Z | 127.0.0.1:10221|173.255.221.44:9901 : Client: error.
>   2014-03-10T16:28:59.230Z | 127.0.0.1:10221|173.255.221.44:9901 : Client: closed.
>   2014-03-10T16:28:59.231Z | Complete.
>   2014-03-10T16:29:09.227Z | Facilitator: connecting to http://127.0.0.1:7000?r=1&transport=websocket.
>   2014-03-10T16:29:09.249Z | Next check in 10 seconds.
>   2014-03-10T16:29:09.250Z | Facilitator: got client:{ host: 127.0.0.1, port: 10069 } relay:{ host: 173.255.221.44, port: 9901 }.
>
> Now what *does* seem to be happening, and what had me confused for a while, is that Firefox seems to be artificially delaying repeated WebSocket connections to the same port with some kind of exponential delay. I originally had a hardcoded port in facilitator.py, and the delay made it appear that something like what the commit message describes was happening.
> Since WebSocket connections eventually start waiting a long time to fail, it looks like they are not being closed. https://people.torproject.org/~dcf/graphs/firefox-exponential-delay.png is a screenshot showing what looks like exponential growth in the Network panel. Chromium doesn't appear to have the same issue; the requests are linearly spaced in the network pane.

I get a different behaviour from you. I was using node-flashproxy with flashproxy.js from tag 1.6. Unlike in your logs above, I would get something like this, paraphrased:

  Client: connecting
  Client: connecting
  # after 3 minutes, the TCP timeout
  Client: error
  # no "Client: closed" and no "Complete"

Then, the proxy_pairs stay in the list, and I can see subsequent "connecting to" lines grow longer and longer until they hit a length of 10. I think it is caused by the websocket implementation setting
Re: [tor-dev] Feasibility of using a Tor Browser plugin as a PT component?
On 22/02/14 04:08, David Fifield wrote:
> 2. Run a second browser, apart from Tor Browser, that receives commands from a client PT program and makes the HTTPS requests it is commanded to.

You might want to look at MozRepl. More summary here:

https://lists.torproject.org/pipermail/tor-dev/2013-November/005833.html

I told gsathya about this and he had a brief look during NYC and thought it was potentially feasible, but AFAIK we haven't done a complete analysis of this yet.

X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] Allowing NAT for relay/exit nodes - Bootstrap file size
This would be a nice-to-have, but not a priority for Tor. OTOH, that functionality is more vital for i2p, who are exploring the idea of integrating into Tor's PT system: https://trac.torproject.org/projects/tor/ticket/10629

Also, right now, no PT servers can actually traverse NAT. In the future, we plan to add WebRTC capability to flashproxy, which will include NAT traversal: https://trac.torproject.org/projects/tor/ticket/5578

If you want to see it done faster, feel free to help us/them out, or find somewhere we can apply for funding for it.

X

On 20/01/14 19:24, Juan Berner wrote:
> Yes, but the point of flash proxies is to use them as bridges; what I meant is to allow ORs behind NAT to be relays or even exit nodes.
>
> 2014/1/20 David Fifield da...@bamsoftware.com
>> On Mon, Jan 20, 2014 at 05:00:38PM -0200, Juan Berner wrote:
>>> 1) Allow NAT clients to be TOR relay nodes (even maybe exit nodes); this would be done using a queue system, possibly in a hidden service but not necessarily, where NAT relay nodes can query what tor clients want to connect to them and initiate the connection. This would allow more nodes in the TOR network.
>>
>> This is how flash proxy works. Clients register themselves as needing a connection, and then proxies connect to the clients. (The problem is that many *clients* are also behind NAT, and then it doesn't work so well.) You can run a flash proxy just by going to a web page like http://crypto.stanford.edu/flashproxy/, and there is also code to run a proxy in the background without a browser: https://trac.torproject.org/projects/tor/ticket/7944.
>>
>> David Fifield

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] Projects to combat/defeat data correlation
> I imagine the anonymity set would be much smaller for these combined transports... fewer people using them.

In my understanding, anonymity-set concerns don't apply to the use of PTs, since PTs operate only on the entry side. The exit side does not know[1] what PT the originator is using, so it is unable to use that information to de-anonymise.

[1] At least, in theory it should not know; perhaps someone can check there are no side-channels? It would be pretty scary if the exit could work out that the originator is using PTs.
[tor-dev] exit-node block bypassing
Hey all,

Flashproxy[1] helps to bypass entry-node blocks. But we could apply the general idea to exit-nodes as well - have the exit-node connect to the destination via an ephemeral proxy. The actual technology probably needs to be different, since we can't assume the destination has a flashproxy (websocket/webrtc) PT server running, but we could probably find a technical solution to that.

However, I talked this over with a few people and there might be legal and security issues. A few points:

- running an exit node carries a great risk; it would be bad/unethical to let ephemeral proxy runners take this risk
- (for security reasons we don't fully understand) there is a process for trusting exit nodes and/or detecting misbehaviour (I see "badexit" emails from time to time). This would be made much harder if exits were ephemeral.
- someone could create a massive number of ephemeral exit nodes and capture a lot of exit traffic, giving them extra data to de-anonymise people.

I was wondering if any of these have been discussed in depth before already, or if the general topic of exit-node block bypassing is something to be explored.

X

[1] http://crypto.stanford.edu/flashproxy

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] exit-node block bypassing
On 31/12/13 12:35, Jeroen Massar wrote:
> On 2013-12-31 12:07, Ximin Luo wrote:
>> Hey all,
>>
>> Flashproxy[1] helps to bypass entry-node blocks. But we could apply the general idea to exit-nodes as well - have the exit-node connect to the destination via an ephemeral proxy.
>
> If an exit node is blocked towards a certain site, that exit node should define a policy stating that it is blocked by that destination. (DirAuths could maybe be made to add extra details like that?) If an exit node is useless it is a bad exit and should not be used at all, that is, shut down.

This is an unrelated topic from my original post. I am asking whether trying to implement an anti-exit-node-blocking system would be A Good Thing To Do.

> For your 'flashproxy' case, it would just mean 'moving' the exit node to the new exit IP btw. You would thus only be shifting the problem.

Those new IPs are ephemeral and unpredictable, and therefore not feasible to block. See the flashproxy page on how it works; a few tweaks are needed to make it work for exits, but it's fairly straightforward to do so. But this is also an unrelated topic. I am less interested in getting it to technically work (because I am convinced it *will* work), and more in whether it is a good idea or not.

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] Slight obfsproxy API change (#10342)
On 12/12/13 23:40, George Kadianakis wrote:
> David Stainton dstainton...@gmail.com writes:
>> Excellent! I was thinking of making this change but lately I haven't had much time. Merging that patch specified in the 1st ticket comment? That looks good. I'd be happy to update the bananaphone transport to use the new api!
>>
>> Cheers, David
>
> The change is now merged and pushed :)

When updating your transport, don't forget to:

- call super(YourTransport, self).__init__() in your own __init__
- rename handshake() -> circuitConnected(), as well as renaming the local circuit params.

X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
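To illustrate the two renames in code - a sketch against a stub base class, not real obfsproxy internals; `BaseTransport` and `BananaphoneTransport` here are stand-ins:

```python
class BaseTransport(object):
    """Stand-in for obfsproxy's transport base class."""
    def __init__(self):
        # State the base class now sets up; subclasses must chain up.
        self.circuit = None

class BananaphoneTransport(BaseTransport):
    def __init__(self):
        # New API: call super(...).__init__() in your own __init__.
        super(BananaphoneTransport, self).__init__()
        self.ready = False

    # Old API: this hook was "def handshake(self, circuit)", with the
    # circuit passed in as a local parameter.
    def circuitConnected(self):
        # New API: the circuit is attached to self by the time this is
        # called, so the handshake uses self.circuit instead.
        self.ready = True
```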
Re: [tor-dev] Transport composition
Hey Kevin, to get you updated on what we've discussed so far, you could try to build the diagrams from this repo:

https://github.com/infinity0/tor-notes/blob/master/pt-compose.rst

The build-dependencies are short and listed in the Makefile. There is also a sketch at the bottom of #9744:

https://trac.torproject.org/projects/tor/ticket/9744#comment:3

For simplicity, we are only considering the case where, for a composition chain of PT[0]..PT[n], every element except PT[n] makes one single outgoing stream to an address specified by the previous element. This excludes a chain that e.g. contains flashproxy in the middle. Our current preferred design would require minimal changes to the Tor PT spec. However, we haven't considered potential performance bottlenecks.

X

On 19/11/13 20:15, Kevin P Dyer wrote:
> Hi George,
>
> Maybe I'm missing something from the discussions that happened eight months ago at the dev meeting (as per the initial comment in [1]). However, I guess I'm a bit confused about the motivation. Just to be clear, the goal is to be able to combine multiple transports easily, right? For example, we may want a transport that has the DPI-resistance of obfsproxy, but the address diversity of flashproxy.
>
> My main concern is that a general composition framework is going to add unneeded complexity to the interface between Tor and the pluggable transports. I understand the long-term benefits to being able to compose pluggable transports, but my concern is that it won't work well in practice, will be a nightmare to manage/deploy/develop, and will have irreconcilable performance bottlenecks.
>
> I think pluggable transport composition will be a good topic to discuss at the PT standup on Friday. To get my head around the current design, it would be great if we could discuss a few use cases beyond obfsproxy+flashproxy.
> -Kevin
>
> [1] https://trac.torproject.org/projects/tor/ticket/7167
>
> On Sun, Nov 10, 2013 at 3:43 AM, George Kadianakis desnac...@riseup.net wrote:
>> Hello Kevin,
>>
>> If you are interested in learning more about the transport combiner idea we were recently discussing, check out trac tickets #10061, #9744 and #7167. It would be awesome if you could comment with any ideas or criticisms you have.
>>
>> Cheers!

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
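The single-outgoing-stream constraint on a chain PT[0]..PT[n] means the chain behaves like plain function composition over the byte stream. A toy model of that idea (my sketch, not the actual obfs-flash wiring - real PTs are stateful network proxies, not pure functions):

```python
def compose_transports(transforms):
    """Compose (encode, decode) pairs for PT[0]..PT[n] into a single
    pair. Encoding applies each layer in order; decoding applies the
    inverses in reverse order."""
    def encode(data):
        for enc, _ in transforms:
            data = enc(data)
        return data
    def decode(data):
        for _, dec in reversed(transforms):
            data = dec(data)
        return data
    return encode, decode

# Two toy layers: XOR "obfuscation" (its own inverse) and hex encoding.
xor_layer = (lambda b: bytes(x ^ 0x42 for x in b),) * 2
hex_layer = (lambda b: b.hex().encode(), lambda b: bytes.fromhex(b.decode()))

encode, decode = compose_transports([xor_layer, hex_layer])
```

A chain with flashproxy in the middle breaks this model precisely because that element does not make one outgoing stream to a fixed next address.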
Re: [tor-dev] Development of an HTTP PT
On 17/11/13 14:22, dardok wrote:
> Hi,
>
> I've been reading about the Selenium web-browser driver thing and I consider that it is not very handy for what an HTTP PT client side needs, that is, to forge HTTP requests and embed the TOR traffic into these HTTP requests. It is more oriented to emulating a web user's interaction with a real browser (such as firefox, chrome, opera, ie, ...) and the interaction and testing of web apps from these browsers. Also I cannot see how to handle the responses from the server side using this thing. Few functions seem to be interesting regarding HTTP protocol handling and manipulation; as I said before, most of the functions present are related to user interaction.
>
> The Python version can be installed easily: pip install selenium
>
> The already implemented functions can be read in this file: /usr/share/pyshared/selenium/selenium.py
>
> I didn't find any useful function to allow an HTTP PT (client side) to embed the TOR traffic into HTTP requests (maybe some cookie-related function) and extract the information sent by the HTTP PT server side (maybe some get_body_text() or get_text() function). So I am ready to conclude that this option is not useful for implementing an HTTP PT client side. Anyway, I would like to discuss this point with someone interested in this topic.

Hey dardok,

I am not an expert with any of these technologies at all, but perhaps you could look into this?

https://addons.mozilla.org/en-us/firefox/addon/mozrepl/

It's an extension you install that then lets you telnet-connect to it and control firefox - which means you could do that from a PT. (There might be an even cleaner interface for programmable access; telnet is just what's advertised on the front page.)

https://github.com/bard/mozrepl/wiki/Tutorial

This shows an example of how to visit a website (the remote bridge, maybe) and get information out of the resulting page.
X

--
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
Re: [tor-dev] recreated website png diagrams as svg
On 11/11/13 14:10, Lunar wrote:
> Justin Findlay:
>> Item h under
>> https://www.torproject.org/getinvolved/volunteer.html.en#OtherCoding
>> discusses (at least) the images at this page,
>> https://www.torproject.org/about/overview.html.en which look as if
>> they're copied from some EFF presentation. I've recreated them as SVG.
>> SVG being a fairly standard vector markup, I supposed it would be more
>> amenable to automatic string replacement than a PNG (I'm not sure what
>> WML is). Please look them over and give me feedback. If they are
>> ultimately found acceptable, you are welcome to use them for the site.
>
> Thanks. I think they look great. :)
>
> But something has always bothered me about these diagrams: they make it
> look like Tor relays are run on everyone's computer. With Justin's
> revision, it looks like they are run on laptops. The vast majority of
> Tor relays are run on stable servers, and most of the traffic flows
> through high-capacity servers hosted in datacenters behind high-capacity
> links. So I think picturing a laptop might be misleading?

Whilst we're on that topic, labelling the final link from the exit node to the destination as "unencrypted" is unnecessarily scary as well. Perhaps we could reword it to "encrypted if destination service is encrypted", and the other links to "always-encrypted".
X

-- 
GPG: 4096R/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git
Re: [tor-dev] Sponsor F: update; next meeting [in *two weeks*]
On 20/10/13 07:02, David Fifield wrote:
> It's not that I'm trying to neglect unglamorous development - I'm
> really not. This is how I see it: as a software engineer, I am always
> trying to reduce risk. Refactoring code reduces risk in the long term
> but increases risk in the short term. Sticking with our old busted
> duplicated code is risky in the long term, but very predictable in the
> short term. We've already done a lot to destabilize the code (just by
> adding new features), so if possible I prefer not to destabilize it
> further before trying to build bundles. The risk associated with
> refactoring is one I think can be safely deferred for a couple of
> weeks.
>
> Does it at least come across that there is some principled reasoning
> behind my recommendations? I'm fully with you on doing code-quality
> improvements before WebRTC and transport composition.

OK, I can get along with this.

-- 
GPG: 4096R/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git
Re: [tor-dev] Sponsor F: update; next meeting [in *two weeks*]
On 19/10/13 06:31, David Fifield wrote:
> On Mon, Oct 14, 2013 at 09:14:25PM +0100, Ximin Luo wrote:
>> Specific remaining tasks:
>> - merge #9349, #6810, #9974
>> - push #7167 to an official tpo repo
>> - do #9976, and #7167#comment:42 might require an obfsproxy fix
>
> I agree with this, except that I don't think #9974 (facilitator
> packaging) and #9976 (general argument passing to registration helpers)
> are necessary for this deliverable. They are nice and I want them, but
> in terms of this deliverable I would prioritize #9349 (facilitator
> transport support) and #7167 (obfs-flash composition) first.
>
> Would you hate me if I suggest not merging #6810 (client code
> reorganization) until after we build at least one bundle? It's lower
> risk to go with the organization we know works, especially given that
> we are changing other things within the bundle.

Strictly speaking, I *would* consider them part of the deliverable, from the view of software quality. Not having them would be a "minimal" outcome, IMO. If we don't consider them part of this deliverable, which deliverable *are* we supposed to consider them part of?

These arguments could be made at any time. If this is an isolated case then fine, but I don't want to see a pattern where unsexy sub-surface work is repeatedly neglected. We're not trying to capture a market, so there's no need for "just good enough" tactics. That would be analogous to shoddy construction work / software engineering that looks OK and works OK until it collapses in an earthquake or gets your mass user-base auto-exploited - or, in our case, until someone does more work on top of flashproxy (#6810 fixes this), or wants to use a client with a different facilitator (#9976 would fix this), or wants to install a new facilitator (#9974 fixes this by greatly lowering the cost). (The last few are pretty important if we don't want a central point of failure.)
If you de-prioritise that work now to make a deadline, you must treat it with the highest priority afterwards, before taking on newer features like WebRTC or general PT composition. (As opposed to e.g. the previous cycle, where we started with #7167, a big new feature.) Especially #6810 - I consider that to be paying off technical debt incurred from the previous cycle, so this isn't even de-prioritising; it's pushing back the correction of something that should have been corrected already.

-- 
GPG: 4096R/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git
Re: [tor-dev] Sponsor F: update; next meeting [in *two weeks*]
On 01/10/13 03:13, Tom Lowenthal wrote:
> Combine obfuscation (obfsproxy) with address-diversity (flashproxy)
> [#11] --- **[On Track: Minimal]**
> The work of integrating obfsproxy with flashproxy is done. George will
> include this in the next released version of the pluggable transports
> browser bundle. George will also write a report on this work. Ximin and
> David will help. By Halloween, the report will be complete and the
> bundles will either be released or well on their way through testing.
> This likely represents some combination of minimal or intended outcomes
> for this deliverable.

We have deployed and tested working instances of the combined obfs3/flashproxy transport. This includes all components - the client, server, facilitator and browser proxy - and they are all backwards-compatible with the old components that only support the plain (non-obfs3) websocket protocol. Soon (this week) the code will be merged.

There are a few other issues to iron out before a production-quality release. We may be able to do it by Halloween, but I'd prefer not to rush through them just to make that date (if that is fine for the sponsor). I'm more certain it will be production-quality sometime mid-November. I haven't considered PTBB packaging and I'm not sure about the work needed, but if George does this in parallel with me making the code production-quality, the timings should line up as expected.
Specific remaining tasks:
- merge #9349, #6810, #9974
- push #7167 to an official tpo repo
- do #9976, and #7167#comment:42 might require an obfsproxy fix

Other less-vital things which improve robustness/quality:
- connection reliability under churn: #9964, #5426, #8285
- flexibility of ecosystem: #9942, #9949, #9965
- various other code-quality issues, all on trac, see [1]

We've also made progress on the long-term goal of general PT composition, mentioned in the intended/ideal outcomes, as #9744:
- solid and complete ideas in place
- start of precise design specs
- can now see a full path to eventual implementation

[1] Full list of tickets, though about half are not relevant to this deliverable:
https://trac.torproject.org/projects/tor/query?status=!closed&component=Flashproxy&order=id

-- 
GPG: 4096R/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git
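As a toy illustration of what "general PT composition" means - this is not the actual flashproxy/obfsproxy code; the classes and framing below are invented stand-ins - chaining transports is essentially function composition over byte-stream transforms:

```python
# Toy sketch of pluggable-transport composition (the #9744 idea).
# Base64Transport stands in for an obfuscating layer (e.g. obfs3) and
# FramingTransport for a carrier layer (e.g. websocket framing); the
# real transports are stateful network protocols, not pure functions.
import base64

class Base64Transport:
    """Stand-in obfuscation layer: encode/decode the payload."""
    def wrap(self, data):
        return base64.b64encode(data)
    def unwrap(self, data):
        return base64.b64decode(data)

class FramingTransport:
    """Stand-in carrier layer: prefix the payload with a 2-byte length."""
    def wrap(self, data):
        return len(data).to_bytes(2, "big") + data
    def unwrap(self, data):
        n = int.from_bytes(data[:2], "big")
        return data[2:2 + n]

class Chain:
    """Compose transports: the first layer is applied first on send."""
    def __init__(self, *layers):
        self.layers = layers
    def wrap(self, data):
        for t in self.layers:
            data = t.wrap(data)
        return data
    def unwrap(self, data):
        for t in reversed(self.layers):
            data = t.unwrap(data)
        return data

# obfs-then-carry, mirroring "obfs3 over websocket" conceptually
chain = Chain(Base64Transport(), FramingTransport())
roundtrip = chain.unwrap(chain.wrap(b"hello"))
```

The point of the sketch is just that each layer only needs a uniform wrap/unwrap interface for composition to work, which is why a precise design spec for that interface is the main prerequisite.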
Re: [tor-dev] Pluggable transport weekly meeting
I assume people will be interested in creating Debian packages for these too. I am wondering if we should adopt a naming convention like tor-pt-sshproxy, tor-pt-flashproxy, tor-pt-obfsproxy, etc. - like how Mozilla extensions are all called xul-ext-*. (We could even start a working group too.) ATM this would involve renaming obfsproxy, but I am about to package flashproxy and am thinking about what to name the package.

X

On 06/09/13 13:44, Griffin Boyce wrote:
> That day and time work well for me -- thanks for setting this up! =)
>
> ~Griffin
>
> On 09/06/2013 04:58 AM, Vmon wrote:
>> I sent this email quite a while ago and was surprised that nobody
>> was interested/replied. Today I found out that I had sent it to a
>> wrong address. But here we are, so I'm sending it again. Please reply
>> so we can kick this off soon.
>>
>> Thanks,
>> Vmon
>>
>> -- Forwarded message --
>> From: vmonmoonsh...@gmail.com
>> Date: Wed, Aug 21, 2013 at 6:46 AM
>> Subject: Pluggable transport weekly meeting
>> To: tor-dev-requ...@lists.torproject.org
>>
>> Hello Tor devs,
>>
>> Following up on what we came up with at the dev summit
>> (https://trac.torproject.org/projects/tor/wiki/org/meetings/2013SummerDevMeeting/PluggableTransports1),
>> we are going to have a weekly 30-minute IRC meeting focusing on
>> pluggable transports. The format (I think) will be scrum-esque: every
>> developer who is working on a pluggable transport will update
>> everybody else about the work they did/are doing on their transport
>> during the week, and ask questions if they have any - for example if
>> they got stuck somewhere and think somebody can help.
>>
>> Preliminarily, we decided to have the meeting on Fridays (because why
>> not), but if you have a serious problem with Fridays then we might be
>> able to pick a better day. For the time of the meeting, considering
>> the geographical positions of the current transport developers, we'll
>> probably end up with a CEST-evening / PST-morning meeting. Having
>> that in mind, I suggest:
>>
>> CEST: 18:00
>> BST (Summer GMT): 17:00
>> EST: 12:00
>> MNT: 10:00
>> PST: 9:00
>>
>> So if this doesn't work for you, please reply to this email with your
>> alternative proposal.
>>
>> Thanks,
>> Vmon

-- 
GPG: 4096R/1318EFAC5FBBDBCE git://github.com/infinity0/pubkeys.git
Re: [tor-dev] RFC: obfsproxyssh
On 28/06/13 10:13, Yawning Angel wrote:
> Hello,
>
> I have been talking about this in #tor-dev for a while (and pestering
> people with questions regarding some of the more nuanced aspects of
> writing a pluggable transport - thanks to nickm, mikeperry and asn for
> their help), and finally have what I would consider a pre-alpha for the
> PT implementation.
>
> obfsproxyssh is a pluggable transport that uses the ssh wire protocol
> to hide tor traffic. It uses libssh2 and interacts with a real sshd
> located on the bridge side. Behaviorally it is identical to a user
> sshing to a host, authenticating with an RSA public/private key pair
> and opening a direct-tcp channel to the ORPort of the bridge.

What sort of PKI are you using to verify the pubkey claimed by either side, to prevent MitM? SSL has the hierarchical PKI of X.509, whereas SSH does not have a standard PKI. Do you know about Monkeysphere? (Generally, I prefer SSH to SSL because it authenticates both ways by default. But you must have a good PKI in mind to prevent MitM.)

> It is aimed more at non-technical users (because anyone with an account
> on a bridge can create a tunnel of their own using existing ssh
> clients), and thus can be configured entirely through the torrc. It
> still needs a bit of work before it is ready for deployment, but the
> code is at the point where I can use it for casual web browsing, so if
> people are interested, I have put a snapshot online at
> http://www.schwanenlied.me/yawning/obfsproxyssh-20130627.tar.gz

Great, please let us know when it's easier to use!

> Note that it is still in the "I got it working today" state, so some
> parts may be a bit rough around the edges, and more than likely a few
> really dumb bugs lurk unseen.
>
> Things that still need to be done (in rough order of priority):
>
> * It needs to scrub IP addresses in logs.
> * I need to compare libssh2's KEX phase with popular ssh clients (for
>   the initial production release, more than likely PuTTY). It currently
>   also sends a rather distinctive banner, since I'm not sure which
>   client(s) it will end up mimicking.
> * I need to come up with a solution for server-side sshd logs. What I
>   will probably end up doing is writing a patch for OpenSSH to disable
>   logging for select users.
> * In a peculiar oversight, OpenSSH doesn't have a way to disable
>   reverse ssh tunnels (e.g. "PermitOpen 127.0.0.1:6969" still allows
>   clients to listen on a port). Not a big deal if Tor starts up before
>   any clients connect, but I'll probably end up writing another patch
>   for this.
> * Someone needs to test it on Windows. All of my external dependencies
>   are known to work, so the porting effort should be minimal (famous
>   last words).
> * The code needs to scrub the private key as soon as a connection
>   succeeds in authenticating, instead of holding onto it. Probably not
>   a big deal, since anyone who can look at the PT's heap can also look
>   at the bridge line in the torrc.
>
> Nice to haves:
>
> * Write real Makefiles instead of using CMake (I was lazy).
>   src/CMakeLists.txt currently needs to be edited by anyone compiling
>   it.
> * It currently uses unencrypted RSA keys. libssh2 supports ssh-agent
>   (on all of the relevant platforms) so key management can be handled
>   that way. I do not think there is currently a mechanism for Tor to
>   query the user for a passphrase and pass it to the PT, but if one
>   gets added, supporting it would also be easy from my end.
> * The code does not handle IPv6, since it uses SOCKS 4 instead of 5.
>   When Tor gets a way to pass arguments to PTs that are >510 bytes, I
>   will change this.
> * libssh2 needs a few improvements going forward (in particular, it
>   does not support ECDSA at all).
> * Code for the bridge side that makes the tunnel speak the managed PT
>   server transport protocol would be nice (for rate limiting).
> * libssh2_helpers.c should go away one day. Not sure why the libssh2
>   maintainers haven't merged the patch that I based the code in there
>   on.
>
> Things that need to be done on the Tor side of things:
>
> * 0.2.4-14-alpha does not have the PT argument parsing code, so this
>   requires a build out of git to function.
> * The code currently in git fails to parse bridge lines with arguments
>   that can't be passed via SOCKS 5 (size restriction). The PT tarball
>   has a crude patch that removes the check, but the config file parser
>   needs to be changed.
> * The Tor code currently in git likes to forget PT arguments. asn was
>   kind enough to provide me with a patch that appears to fix this
>   (though the PT has a workaround for when it encounters this
>   situation), but moving forward a formal fix would be ideal.
> * All the PT-related cleverness in the world won't do much vs active
>   probing if there is an ORPort exposed on the bridge. Tor should be
>   able to handle "ORPort 127.0.0.1:6969" (it may currently work, I'm
>   not sure; there should be a way to disable the reachability check,
>   if only to reduce log spam).
>
> Open questions:
>
> * Is this useful?
> * Is it worth biting
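To make the SOCKS 4 / 510-byte point above concrete, here is a toy sketch - not obfsproxyssh's actual code. The field layout follows SOCKS4, and my understanding is that Tor passes bridge-line PT arguments to the transport client in the SOCKS userid field; the argument keys (`user`, `orport`) are invented for illustration. SOCKS4's destination field is IPv4-only, which is why IPv6 bridges can't be expressed, while its variable-length userid avoids SOCKS5's 255+255-byte username/password cap.

```python
# Toy sketch: per-bridge PT arguments carried in a SOCKS4 CONNECT
# request's userid field. Argument keys are hypothetical examples.
import socket
import struct

def build_socks4_connect(dst_ip, dst_port, userid):
    """Build a SOCKS4 CONNECT: VN=4, CD=1, DSTPORT, DSTIP (IPv4 only),
    then a NUL-terminated userid of arbitrary length."""
    return (struct.pack(">BBH", 4, 1, dst_port)
            + socket.inet_aton(dst_ip)          # IPv4 only - the limitation
            + userid.encode("ascii") + b"\x00")

def parse_socks4_connect(data):
    """Parse the request back into (ip, port, args) - the PT's view."""
    vn, cd, port = struct.unpack(">BBH", data[:4])
    assert vn == 4 and cd == 1, "not a SOCKS4 CONNECT"
    ip = socket.inet_ntoa(data[4:8])
    userid = data[8:data.index(b"\x00", 8)].decode("ascii")
    # split "k1=v1;k2=v2" into a dict of transport arguments
    args = dict(kv.split("=", 1) for kv in userid.split(";") if kv)
    return ip, port, args

req = build_socks4_connect("203.0.113.5", 22, "user=alice;orport=6969")
ip, port, args = parse_socks4_connect(req)
```

Since the userid has no fixed size limit, arbitrarily long key material can ride along, at the cost of being stuck with IPv4 destinations until Tor grows another argument-passing channel.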