Re: Scroogle and Tor

2011-02-14 Thread Robert Ransom
On Mon, 14 Feb 2011 20:19:50 -0800
Mike Perry mikepe...@fscked.org wrote:

 2. Storing identifiers in the cache
 
 http://crypto.stanford.edu/sameorigin/safecachetest.html has some PoC
 of this. Torbutton protects against long-term cache identifiers, but
 for performance reasons the memory cache is enabled by default, so you
 could use this to differentiate crawlers who do not properly obey all
 browser caching semantics. Caching is actually pretty darn hard to get
 right, so there's probably quite a bit more room here than just plain
 identifiers.

Polipo monkey-wrenches Torbutton's protection against long-term cache
identifiers.


Robert Ransom




Re: Yet another UDP / DNS question...

2011-02-13 Thread Robert Ransom
On Sun, 13 Feb 2011 18:50:19 +
Tomasz Moskal ramshackle.industr...@gmail.com wrote:

  I wonder why your uid should be different every time you reboot, but you
  can also use the name of the user instead of the numerical value.
  
 Well I can't tell you why, but that's how it is. To double-check, I rebooted
 twice just now, and ps -A | grep -w tor each time gave me a different UID
 for tor.

That's a process ID, not a user ID.
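
For example, on a typical Linux system (a sketch; the 'debian-tor' user
name and all the numbers are illustrative):

  # The first column of 'ps -A' is the PID, which changes on every start:
  $ ps -A | grep -w tor
   1234 ?        00:00:05 tor

  # To see the user (and UID) the tor process actually runs as:
  $ ps -o user,uid,pid,comm -C tor
  USER       UID   PID COMMAND
  debian-tor 106  1234 tor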


Robert Ransom




Re: IP address blocked on certain site

2011-02-03 Thread Robert Ransom
On Thu, 03 Feb 2011 22:21:34 -0500
Aplin, Justin M jmap...@ufl.edu wrote:

 On 2/3/2011 8:28 PM, Joe Btfsplk wrote:
  I am using Torbutton.  It is supposed to Torify Firefox - yes?
 
 In a roundabout way, yes. Torbutton forwards Firefox traffic to Polipo, 
 which in turn sends the traffic to the SOCKS port of Tor. Disabling 
 Torbutton and entering the Tor SOCKS information into Firefox's network 
 configuration would skip the Polipo part, and eliminate any problems you 
 might be having with some hidden Polipo cache.

Turning off 'Use Polipo' in the Torbutton Preferences dialog would be
easier and much safer.
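
For reference, the manual configuration described above would look like
this (a sketch; 8118 and 9050 are the default Polipo and Tor ports, and
the about:config preference is needed to keep DNS lookups inside Tor):

  # Torbutton's default path:
  #   Firefox -> Polipo (HTTP proxy, 127.0.0.1:8118) -> Tor (SOCKS, 127.0.0.1:9050)
  # Direct SOCKS configuration, skipping Polipo:
  #   Firefox: Preferences -> Advanced -> Network -> Settings
  #     SOCKS Host: 127.0.0.1   Port: 9050   (SOCKS v5)
  #   about:config: network.proxy.socks_remote_dns = true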


Robert Ransom




Re: [scrubbed].onion and log level

2011-02-03 Thread Robert Ransom
On Thu, 3 Feb 2011 23:39:40 -0500
cmeclax-sazri cmeclax-sa...@ixazon.dynip.com wrote:

 I have a friend here and we're trying to debug a TorChat connection. We had 
 it 
 working once, then I'm trying to talk him through editing his config. He 
 doesn't show up on my TorChat, and I show up as a blue ball on his. So I 
 looked at the log file to see what's happening. It says Tried for 120 
 seconds to get a connection to [scrubbed]:11009. The problem with that is, I 
 have other people in my buddy list, at least one of whom isn't on line, and I 
 have no way of knowing when it's my friend that I'm having trouble connecting 
 to. So I changed the log level to info, then to debug, reloaded Tor, then 
 restarted it. I still get [scrubbed], even in the info messages, and the 
 debug messages are way too much to wade through. How can I (temporarily) tell 
 Tor not to scrub the hidden services?

Add:

SafeLogging 0

to your torrc.


Robert Ransom




Re: Tor 0.2.2.22-alpha is out

2011-01-30 Thread Robert Ransom
On Sun, 30 Jan 2011 12:48:02 +0330
Hasan mhaliz...@gmail.com wrote:

 I have downloaded the new version from
 https://www.torproject.org/download/download but still I can't connect to
 Tor!! :(

Tor 0.2.2.22-alpha contains 'a slight tweak ... that makes *relays and
bridges* that run this new version reachable from Iran again' (emphasis
added).  Running it as your client will not help you.

You need to find a bridge that is running 0.2.2.22-alpha, or find a
relay that is running 0.2.2.22-alpha and configure it as a bridge.
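
In torrc form, the bridge configuration would look like this (a sketch;
the address and port are placeholders, not a real bridge):

  UseBridges 1
  Bridge 203.0.113.7:443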


 My IP Add: [DELETED]

You should not have published your IP address.  It is quite easy for
your government to use your IP address to identify you and punish you,
and no one on this list can use your IP address to help you.


Robert Ransom




Re: Is gatereloaded a Bad Exit?

2011-01-30 Thread Robert Ransom
On Sun, 30 Jan 2011 10:33:31 +0100
Jan Weiher j...@buksy.de wrote:

  At some point, we intend to shrink exit policies further as Tor scales
  to more decentralized schemes. Those exit policies will likely be
  represented as bits representing subsets of ports. When that time
  comes, we will very likely combine encrypted and unencrypted versions
  of ports together, removing this option entirely.

 Sounds good. But what to do for now? Just create a list of nodes which
 only allow unencrypted traffic and put them into the ExcludeExitNodes
 list? Shouldn't these nodes be excluded by default?

They will be now.

The exit scanner detects such nodes, and Mike Perry has just made it
easier to mark nodes with suspicious policies with the BadExit flag in
the future:

https://gitweb.torproject.org/torflow.git/commitdiff/2320961a05e3277534887c7f76036c826a879230


Robert Ransom




Re: Question and Confirmation.

2011-01-30 Thread Robert Ransom
On Sun, 30 Jan 2011 22:33:21 +
Matthew pump...@cotse.net wrote:

 On 30/01/11 02:32, and...@torproject.org wrote:
  On Fri, Jan 28, 2011 at 11:29:25PM +, pump...@cotse.net wrote 2.3K 
  bytes in 53 lines about:
  : My understanding is that Tor encrypts both the content of a data
  : packet and also the header.  It encrypts the packet and header three
  : times on the client (my computer) and then at each node one layer is
  : decrypted until the data packet and header are decrypted to
  : plaintext at the final exit node (except when TLS is used).  Right?
 
  Actually, tor wraps the original traffic in encryption and tunnels it
  through the 3 hops of a circuit.  We do not touch the original data.

 Sorry, I'm not trying to be dumb, but I'm unclear how your answer differs
 from my assumption.
 
 Tor takes all the data (header and content), encrypts it three times on the
 client (me), and then at each node one layer is decrypted OR is all of
 it decrypted at the exit node?

Each relay removes one layer of encryption.

Tor does *not* encrypt and send packet headers.  Tor only relays the
data within a TCP connection.


Robert Ransom




Re: Polipo bug reporting

2011-01-30 Thread Robert Ransom
On Sun, 30 Jan 2011 22:59:49 +
Geoff Down geoffd...@fastmail.net wrote:

 how do I report a bug with the Polipo in
 https://www.torproject.org/dist/vidalia-bundles/vidalia-bundle-0.2.2.22-alpha-0.2.10-ppc.dmg
 ?
 Also, how do I tell which version is in there, please?

If that bundle contains a CHANGES file for Polipo, the last entry in it
is for the included version of Polipo.  
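
For example, from a shell (a sketch; adjust the path to wherever the
bundle's .dmg actually mounts):

  # Find Polipo's CHANGES file inside the mounted bundle and show its
  # most recent entry, which names the included version:
  find /Volumes/Vidalia-Bundle -iname CHANGES -exec head -n 5 {} \;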

 (I saw http://archives.seul.org/or/talk/Jan-2011/msg00161.html but it
 doesn't specify where the new bugtracker is).

We do not know of any new bug tracker for Polipo.  If you have a bug
report for Polipo itself, report it to the polipo-users mailing list
(see https://lists.sourceforge.net/lists/listinfo/polipo-users).


Robert Ransom




Re: How to use Google Gadgets with Tor? - Is this possible?

2011-01-16 Thread Robert Ransom
On Sun, 16 Jan 2011 06:06:11 +
M moeedsa...@gmail.com wrote:

 On Sat, Jan 15, 2011 at 7:02 PM, Mike Perry mikepe...@fscked.org wrote:

  You could also install an addon to observe the requests your browser
  uses in both non-Tor and Tor accesses of this gadget to see if the
  requests appear different for some reason. That may help diagnose the
  cause:
  https://addons.mozilla.org/en-US/firefox/addon/live-http-headers/
  https://addons.mozilla.org/en-US/firefox/addon/tamper-data/

 On a side note, I had asked the group before about Google gadgets and
 whether there is some security issue with using them with Tor. I received
 the response that it had not really been tested before. Should I understand
 it's safe now?

If you are talking about the program called 'Google Gadgets', no, it
has not been audited, and it is unlikely to be safe to use over Tor.

This thread is about using Google gadgets embedded in a web page with
Firefox (and Torbutton).


Robert Ransom




Re: Gmail saying cookies are turned off but they are not

2011-01-12 Thread Robert Ransom
On Wed, 12 Jan 2011 10:49:25 -0500
Praedor Atrebates prae...@yahoo.com wrote:

 OK, great.  I hadn't run into this issue until very recently so had no reason 
 to follow anything having to do with it.  Now the question is...where does 
 one go to change this hidden setting?  Where is the hidden setting hidden?

  Setting this hidden pref to false in about:config will fix the issue
 ^RIGHT HERE^

  for you. See ticket #2377 for info on patches/fixes:
  https://trac.torproject.org/projects/tor/ticket/2377


 
 
 
 On Wednesday, January 12, 2011 06:53:55 am you wrote:
  Thus spake Praedor Atrebates (prae...@yahoo.com):
  
   I am using my usual Torbutton + Firefox to access a Gmail account.
   I have generally had no problems, but lately I try to log in and get
   a message that cookies are turned off and that I need to turn them on.
   
   Cookies are NOT turned off, they are set to be treated as session
   cookies and they get wiped whenever I shut off firefox.  Perhaps
   there is a setting hidden away somewhere that I can check, whether
   in the tor button settings or firefox?  
  
  This is a bug in Torbutton. It is caused by the hidden setting:
  extensions.torbutton.xfer_google_cookies.
  
  This setting was introduced to reduce the number of captchas that
  google presented, so that you would only have to solve a captcha once,
  instead of once per country code domain. It seems to be interfering
  with gmail logins when your country code changes.
  
  Setting this hidden pref to false in about:config will fix the issue
  for you. See ticket #2377 for info on patches/fixes:
  https://trac.torproject.org/projects/tor/ticket/2377
  
  Also, please read the archives more closely. This was *just* discussed
  in a different thread.
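
In practice, the fix described above looks like this (a sketch; the
preference can be set through about:config, or in a user.js file in the
Firefox profile directory):

  // user.js -- disable Torbutton's google-cookie transfer,
  // per ticket #2377 referenced above:
  user_pref("extensions.torbutton.xfer_google_cookies", false);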
  
  
  
 





Re: Tor-BlackBelt Privacy

2011-01-06 Thread Robert Ransom
On Thu, 06 Jan 2011 09:04:23 +0100
Karsten N. tor-ad...@privacyfoundation.de wrote:

 Sorry, I forgot to send the torrc file. I'll leave out the values for
 Vidalia and post only the specific Black Belt Privacy values:
 
 CircuitBuildTimeout 10
 NumEntryGuards 10

These two lines might make Tor slightly faster, but will put far more
load on the Tor network.  The NumEntryGuards line will also make the
client more vulnerable to certain anonymity-set-reducing attacks.

 ConstrainedSockSize 256 KB
 ExcludeNodes IL
 ExcludeExitNodes IL

Either 'Cav' is seriously afraid of some node named IL, or he is
trying to protect his users from those $DEROGATORY_ADJECTIVE Jews and
didn't put in the curly braces needed to exclude all nodes in a
country.  (And didn't realize that Mossad can rent servers in other
countries.)
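
For reference, the country-code syntax the quoted torrc is missing (a
sketch; without braces, 'IL' names a single relay nicknamed IL, while
{il} matches every relay geolocated in Israel):

  ExcludeNodes {il}
  ExcludeExitNodes {il}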


Robert Ransom




Re: How does a ftp-server log real ip-address of client machine?

2010-12-29 Thread Robert Ransom
On Wed, 29 Dec 2010 21:12:36 +
Orionjur Tor-admin tor-ad...@orionjurinform.com wrote:

 I usually connect to my servers through Tor.
 When I connect to them through ssh or sftp I don't find anything serious
 in my server's logs except the IP addresses or names of the appropriate
 Tor exit nodes.
 But when I connect to them through plain ftp (without ssh) I can see the
 real IP address of my client machine, such as the following:
 proftpd[32816] someuser (anonymizer2.torservers.net[174.36.199.200]):
 Refused PORT 192,168,1,5,203,191 (address mismatch)
 As I understand it, if my machine were not behind NAT and had a 'white'
 (public) IP address, my anonymity would be compromised.
 How does an ftp server log the real IP address of the client machine? And
 how can I avoid it?

Your FTP client sent your IP address to the server.

To prevent your FTP client from sending your IP address to the server,
you need to use an FTP client that supports 'passive mode', at the very
least.  Setting 'passive mode' may or may not be enough; if you want to
make sure the FTP client you want to use won't leak information, audit
its source code.
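
The quoted log line shows exactly what leaked: FTP's active-mode PORT
command encodes the client's IP address and port as six decimal bytes.

  # PORT 192,168,1,5,203,191 decodes as:
  #   IP   = 192.168.1.5            (first four bytes; a NAT address here)
  #   port = 203*256 + 191 = 52159  (last two bytes)
  # Passive mode reverses the direction: the server advertises an address
  # and the client connects out, so no client address is ever sent.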


Robert Ransom




Re: Any way to secure/anonymize ALL traffic?

2010-12-26 Thread Robert Ransom
On Thu, 23 Dec 2010 09:21:08 -0500
Praedor Atrebates prae...@yahoo.com wrote:

 Got it now.  Now when I point to 127.0.0.1 I get places.  Now the question 
 is, how can one test whether or not their DNS is leaking?  There is the tor 
 status page that can tell you whether or not you are using tor but what about 
 something equivalent to test your DNS anonymity?

The transparent proxying firewall rules on the Tor wiki are intended to:

* not affect any traffic to or from Tor,
* redirect all other outbound TCP connections into Tor's TransPort,
* redirect all other outbound DNS packets into Tor's DNSPort, and
* drop all other outbound packets.
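
A condensed sketch of such a ruleset (assumptions: Tor runs as user
debian-tor with TransPort 9040 and DNSPort 5353; the full ruleset on the
wiki handles more cases than this):

  iptables -t nat -A OUTPUT -m owner --uid-owner debian-tor -j RETURN
  iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5353
  iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports 9040
  iptables -A OUTPUT -m owner --uid-owner debian-tor -j ACCEPT
  iptables -A OUTPUT -o lo -j ACCEPT
  iptables -A OUTPUT -j DROP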

But the only way I know of to test whether your computer is leaking DNS
packets without disturbing your firewall configuration is to use a
packet sniffer.
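
For example (assuming the external interface is eth0):

  # Watch for DNS packets leaving the machine directly; with working
  # transparent proxying, this should print nothing:
  tcpdump -n -i eth0 port 53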


Robert Ransom




Re: Any way to secure/anonymize ALL traffic?

2010-12-26 Thread Robert Ransom
On Wed, 22 Dec 2010 17:10:32 -0500
Praedor Atrebates prae...@yahoo.com wrote:

 Would it be possible to have the VM change timezone in some 
 random/semi-random fashion so that any timezone (and other) info that could 
 be otherwise acquired would be just as unreliable an identifier of your 
 system/location as information acquired from a tor session?

Maybe, but it would be better to set the time zone to US Eastern Time
(America/Detroit on at least glibc-based Linux distributions), so that
you'll blend in with English-speaking T(A)ILS users.
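
For example, on a glibc-based Linux system:

  # Per-process:
  export TZ=America/Detroit
  # System-wide (as root):
  ln -sf /usr/share/zoneinfo/America/Detroit /etc/localtime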


Robert Ransom




Re: tor is blocked in china

2010-12-26 Thread Robert Ransom
On Mon, 27 Dec 2010 10:41:26 +0800
Lu Wei luweit...@gmail.com wrote:

 Gitano wrote on 2010-12-24 3:23:
  On 2010-12-23 06:49, Lu Wei wrote:
  
  The only small inconvenience is that the bridge address must be entered
  numerically.
  
  You can also use the following Syntax:
  
 Bridge URL:portnumber fingerprint
  
 I use the Vidalia bundle for Windows, on which the accepted syntax is:
 Bridge IP:port fingerprint
 So I have to do an nslookup every time before starting. What's more, the
 syntax that actually works is:
 Bridge IP:port
 The fingerprint cannot be present. I hear that it's because fingerprint
 checking is blocked.

The problem is that Vidalia forces Tor's 'UpdateBridgesFromAuthority'
option on.  When the UpdateBridgesFromAuthority option is on, and a
Bridge line contains a fingerprint, Tor contacts the bridge authority
to ask for the bridge's descriptor before contacting any bridges.

The safest thing to do is to use only Bridge lines containing
fingerprints, and turn off UpdateBridgesFromAuthority.  This way, Tor
will not contact the bridge authority, but will check the fingerprints
of the bridges it connects to so that it can detect man-in-the-middle
attacks.  Unfortunately, Vidalia will not allow you to configure Tor
that way.
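
If you can edit the torrc directly (outside Vidalia), the safe
configuration described above would be (a sketch; the address, port, and
fingerprint are placeholders, not a real bridge):

  UseBridges 1
  UpdateBridgesFromAuthority 0
  Bridge 203.0.113.7:443 0123456789ABCDEF0123456789ABCDEF01234567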


Robert Ransom




Re: glibc Errors for TBB 1.0.17

2010-11-28 Thread Robert Ransom
On Sat, 27 Nov 2010 21:51:00 +1000
cgp3cg cgp...@gmail.com wrote:

 Just upgraded from Tor Browser Bundle 1.0.14 to 1.0.17 for Linux i686,
 running on Debian lenny/5.0.6. Getting glibc errors:
 
 Launching Tor Browser Bundle for Linux in /path/to/tor-browser_en-US
 ./App/vidalia: /lib/i686/cmov/libc.so.6: version `GLIBC_2.9' not found
 (required by /path/to/tor-browser_en-US/Lib/libQtGui.so.4)
 ./App/vidalia: /lib/i686/cmov/libc.so.6: version `GLIBC_2.10' not found
 (required by /path/to/tor-browser_en-US/Lib/libQtNetwork.so.4)
 ./App/vidalia: /lib/i686/cmov/libc.so.6: version `GLIBC_2.9' not found
 (required by /path/to/tor-browser_en-US/Lib/libQtCore.so.4)
 
 Current installed version of glibc is 2.7 (standard Debian version). I
 guess this reflects a change in the build environment for TBB?

Yes, and it looks like a bug to me.  Added to Trac as #2225
(https://trac.torproject.org/projects/tor/ticket/2225).

 I run Tor from a USB drive, so the portable all-in-one Tor/Vidalia/FF
 bundle is excellent. Happy to build the TBB from source/components ...
 are there instructions for the process? Or some other way around the
 problem?

See https://gitweb.torproject.org/torbrowser.git for the build
scripts, but we would prefer to fix this bug.


Robert Ransom




Re: Do I need an updated .torrc file?

2010-11-24 Thread Robert Ransom
On Mon, 22 Nov 2010 21:51:16 +
Matthew pump...@cotse.net wrote:

   Hello,
 
 My .torrc file says:
 
 ## Configuration file for a typical Tor user
 ## Last updated 12 April 2009 for Tor 0.2.1.14-rc.
 ## (May or may not work for much older or much newer versions of Tor.) 
 
 Do I need to get a new .torrc version?  I have had a look online and cannot 
 find a template.  I am using the latest version (0.2.1.26) so see no reason 
 to install from scratch.
 
 Any suggestions?  Thanks.

You only need a new torrc if your current one causes Tor to stop
working or emit warning messages.
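
One quick way to check (a sketch; adjust the path to your torrc):

  # Parse the config without starting Tor; any warnings are printed:
  tor --verify-config -f /etc/tor/torrc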


Robert Ransom




Re: Active Attacks - Already in Progress?

2010-11-24 Thread Robert Ransom
An attacker who can monitor your Tor circuit at its middle node and the
connection from the exit node to the server can link the connection at the
server to your guard node for the circuit.  This will not provide your IP
address to the attacker.  However:

* An attacker who knows one of your guard nodes may be able to begin
  monitoring the guard node's incoming connections.
* An attacker who knows all three of your current guard nodes, can
  monitor the middle node on another Tor circuit, and sees that it
  originates from a guard node you are not currently using, can
  determine that you did not open that Tor circuit.  This is a
  surprisingly damaging attack, especially if your Tor client has
  chosen one or more low-bandwidth (and therefore relatively unpopular)
  guard nodes.


An attacker who monitors the TLS connection from you to your guard node
and the Tor circuit at your middle node may later gain access to logs
kept by the server you are accessing, match the IP address and times of
connections to the server with the times at which you opened TCP
connections through the Tor circuit, and thereby determine what
requests you sent to the server.


 What recourse do we have? Can someone more knowledgeable shed more light
 on this? 

There are several torrc options that you can set if you are afraid of
certain relays -- ExcludeNodes, ExcludeExitNodes, StrictNodes,
StrictExitNodes, NodeFamily, and perhaps others.  ExcludeExitNodes may
be useful if you find that an exit node is misbehaving and is not yet
flagged as a BadExit.  I strongly recommend that you DO NOT set any of
these directives, with the possible exception of ExcludeExitNodes, and
then only if you are *very* certain that a particular exit node is
actively malicious.
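
If you do have that kind of evidence against a single exit, the narrow
form looks like this (a sketch; the fingerprint is a placeholder):

  # Exclude one known-bad exit by fingerprint, not whole families:
  ExcludeExitNodes $0123456789ABCDEF0123456789ABCDEF01234567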

All of the above options will change the probability distribution from
which your Tor client chooses circuits.  If you blacklist a few
low-bandwidth relays, you probably won't change the distribution
noticeably, but you won't improve your security noticeably, either.  If
you blacklist one or more of the major families of high-bandwidth Tor
relays and/or exits, you will change the distribution quite noticeably,
and you will make yourself quite distinguishable from normal Tor users.

The most obvious way in which choosing circuits from an unusual
probability distribution can hurt you is through the distribution from
which you choose exit nodes.  An adversary can capture and examine a
particularly sensitive server's logs, notice that someone is accessing
it only through relatively low-probability Tor exits, and then go look
for logs from less sensitive servers to which you might have given
information that readily identifies you.  If the adversary knows that
someone is posting information they want to suppress to a blog or forum
through low-probability Tor exits, and then finds that Fred Foobar
routinely accesses a shopping site through the same low-probability Tor
exits, Fred Foobar is in trouble.

There is another, somewhat less obvious way in which choosing circuits
from an unusual distribution can hurt you -- if you routinely download
large files from an adversary-monitored server through high-bandwidth
exit nodes, but low-probability, low-bandwidth middle nodes, the
adversary may be able to detect this fact from the server alone, and
use it to link your connections together.

I assume that choosing circuits from an unusual distribution can allow
other attacks as well.  In general, the Tor developers try to avoid
making different clients' circuit distributions distinguishable, and
would prefer that you not make your Tor client's circuit distribution
distinguishable yourself, even if there is no obvious way that your
particular change will allow an attack on your anonymity.


 * Of course, the well-organized attacker would go to the trouble to
 construct names that truly blended in with the Tor namescape - such as
 MrSpudRelays, QueenAnnesRevenge, SteveKenpIsMyHero, and so forth.

The Adversary would like to thank you for providing those names.  They
will be *very* useful.


Robert Ransom




Re: Tor 0.2.1.26-1~~lenny+1: segfault with libcryto.so.0.9.8

2010-11-19 Thread Robert Ransom
On Fri, 19 Nov 2010 09:44:47 +0100
Paul Menzel paulepan...@users.sourceforge.net wrote:

 Am Mittwoch, den 17.11.2010, 12:04 -0500 schrieb Roger Dingledine:
  On Wed, Nov 17, 2010 at 11:45:32AM -0500, Nick Mathewson wrote:
I noticed that Tor had crashed on my system. I am using Debian Lenny
with Tor 0.2.1.26-1~~lenny+1. The only thing I could find out about this
crash is the following line running `dmesg`.
   
   Without more information, there's not much info to go on there to
   diagnose the problem.  Generally, to debug a segfault, we need a stack
   trace.  To get one of those, make sure you're running Tor with
   coredumps enabled, and use gdb to get a trace if Tor crashes again.
  
  On Debian, you want to apt-get install tor-dbg, so you get the symbols
  for the Tor binary.
 
 I did so now.
 
   sudo aptitude install tor-dbg
 
 (Aptitude has been Debian's recommended package manager since Lenny.)
 
  You might even have a core file already sitting in your datadirectory,
  which I think is /var/lib/tor/
 
 Yes, I have. Two of them, actually. They are 60 MB and 117 MB in size. Is it
 safe to make them publicly available somewhere?

No.  The core dumps contain all session keys and secret keys which Tor
was using at the time, and those must not be disclosed.

 Are they of use for
 someone since no debug symbols were installed when the core dumps were
 created?

If you have installed the debug symbol package corresponding to the
version of Tor, yes, they are useful.  Use GDB or one of its frontends
to print a traceback from the core.  The traceback should be safe to
disclose.
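
For example (a sketch; adjust the binary and core file paths):

  # Load the core with symbols and print a backtrace for every thread:
  gdb /usr/sbin/tor /var/lib/tor/core
  (gdb) thread apply all bt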


Robert Ransom




Re: The best way to run a hidden service: one or two computers?

2010-11-10 Thread Robert Ransom
On Wed, 10 Nov 2010 10:39:34 -0800 (PST)
Martin Fick mogul...@yahoo.com wrote:

 I have a question related to the tor client
 and hidden service protocol designs which
 may be relevant?  Can a tor client/hidden
 service sitting behind a NATting router
 query its router's internet facing public IP
 from other tor nodes?

Yes.  Current Tor relays send the IP address of the other node in a
NETINFO cell at the beginning of each TLS connection.

 If so, could the protocol be changed to prevent this somehow?

No.  This would break both bridges and relays operated behind a NAT,
even with the ORPort forwarded to the internal IP address on which the
bridge or relay is listening.


Robert Ransom




Re: Firefox,FireFTP,FTP etc. downloads anonymity.

2010-11-07 Thread Robert Ransom
On Sun, 7 Nov 2010 10:39:25 -0800 (PST)
Luis Maceira luis_a_mace...@yahoo.com wrote:

 When we read the Tor FAQs related to anonymity of ftp transfers, there are
 several questions that come to mind:
 1) Is the most recent FileZilla really secure/tested, so we can trust the
 anonymity it provides through Tor? (something close to
 Firefox/Privoxy/Torbutton?)

Supporting SOCKS4A is a good sign -- that means it might not leak DNS
requests -- but I don't think the Tor developers have reviewed it to
check for other anonymity and security issues.

 2) Using Firefox 3.6.12 and FireFTP 1.0.9 (most recent versions), downloading
 from Adobe (Flash Player or Adobe Reader, for example) I receive a warning
 that an external application is being launched - accept or reject. Is this
 app FireFTP? Is this warning coming from Torbutton? Does FireFTP 1.0.9 still
 leak DNS requests?

Torbutton displays that warning when you start to download a file that
Firefox will not display itself.  Saving the file to disk with Firefox
itself (with no extensions) probably won't break your anonymity.  I
don't know whether using FireFTP to download the file will break your
anonymity.

However, if you run or open the downloaded file (whether after or
instead of saving the file to disk) on a computer that will ever again
be connected to the Internet, your anonymity can quite easily be
compromised (through unique identifiers hidden inside an executable
file, for example).

 3) When it comes to ftp through Tor, reading the Tor FAQs it seems the
 better solution is 3proxy; however, this software is not integrated into the
 major (any?) Linux distributions, and it must be downloaded directly from
 a Russian website. The guys behind 3proxy may be really trustworthy, but I do
 not feel especially comfortable using the software.

What is 3proxy?


Robert Ransom




Re: Vidalia GeoIP

2010-11-07 Thread Robert Ransom
On Mon, 08 Nov 2010 03:07:43 +
Geoff Down geoffd...@fastmail.net wrote:

 Hi,
 I don't use Vidalia much, so I can't say how long this has been the
 case, but the last couple of times I have started it up (with Tor
 already running) there has been no GeoIP data - no flags in the relay
 list, no lines on the map. I've not observed any calls to the GeoIP
 server either.
 Tor's log does say 'Parsing GEOIP file' at each startup.
  I'm using Tor 0.2.2.15-alpha / Vidalia 0.2.6 on OS X 10.3 PPC

See
https://blog.torproject.org/blog/shutting-down-vidalia-geoip-mapping-server
and upgrade to Vidalia 0.2.10.


Robert Ransom




Re: Crypto for hidden services [was: TorFaq on https]

2010-10-29 Thread Robert Ransom
On Thu, 28 Oct 2010 21:13:34 -0700
Robert Ransom rransom.8...@gmail.com wrote:

 On Thu, 28 Oct 2010 22:06:03 -0400
 grarpamp grarp...@gmail.com wrote:

  Is the server's (hidden service's)
   privacy threatened by using https too, in any way?
  
   I don't see any risk to the server.
  
  Not particularly. Though it would add additional fingerprinting
  opportunities beyond Tor and the service themselves. This is
  the only one I can think of.
 
 I thought of this, but the hidden service private key would be enough
 of a giveaway.  Having a second private key around is no easier or
 harder to hide than having the first private key around.

Oh, you meant remote fingerprinting of the server's TLS stack.  I
didn't think of that, but I doubt that it's any worse than the HTTP
server's fingerprint.

I thought you were talking about fingerprinting a captured server,
because Tor is not supposed to leak (much) information about itself to
the other end of a circuit.


Robert Ransom




Re: TorFaq on https for hidden services ( was: Hints and Tips for Whistleblowers )

2010-10-28 Thread Robert Ransom
On Thu, 28 Oct 2010 10:10:52 +0100
startx sta...@plentyfact.org wrote:

 Hello.
 
 I'm starting this as a new thread, as my question is only inspired by
 the discussion above.
 
 in the TorFaq
 ( https://trac.torproject.org/projects/tor/wiki/TheOnionRouter/TorFAQ ) 
 it says:
 
   Why is it better to provide a hidden service Web site with HTTP
   rather than HTTPS access? 
 
   Put simply, HTTPS access puts the connecting client at higher risk,
   because it bypasses any first-stage filtering proxy.
 
 
 The answer in the FAQ refers to Privoxy, so I wonder: is this
 answer obsolete by now?

Yes.

 Or is it still the general recommendation to
 run hidden services without https?

I would recommend that hidden services not use HTTPS.  The Tor hidden
service protocol does an adequate job of authenticating servers and
encrypting traffic to them.  In addition, it is unlikely that any CA
that Firefox is configured to trust would issue a certificate for
a .onion hostname.

 Is the server's (hidden service's)
 privacy threatened by using https too, in any way?

I don't see any risk to the server.

 the FAQ also says:
 
   These objections all apply to HTTPS, TLS, SSH, and generally all
   cryptography over Tor, regardless of whether or not the destination
   is a hidden service
 
 which i think is causing some confusion.

Yes, that is a bad sentence.


I think it's time to nuke that FAQ entry.  (Probably long past time to
nuke it.)


Robert Ransom




Re: Firefox ctrl-shift-del vs. Torbutton

2010-10-28 Thread Robert Ransom
On Thu, 28 Oct 2010 20:57:24 -0400
grarpamp grarp...@gmail.com wrote:

 For the users who have checked all the c-s-d checkboxes and reviewed
 all the firefox.edit.preferences pages...
 
 For any given phase/method of browsing/usage, does torbutton clear
 any additional state beyond what c-s-d clears?

Torbutton clears TLS session resumption information out of the browser,
which is not listed in the ‘Clear Recent History...’ dialog, when the
user toggles between Tor and non-Tor browsing:


On Wed, 27 Oct 2010 16:41:57 -0700
Mike Perry mikepe...@fscked.org wrote:

 Thus spake Seth David Schoen (sch...@eff.org):
 
   Hi,
   I don't understand, too and in my opinion, this is utter nonsense. I'm
   not aware of any negative impacts on privacy due to the usage of
   https://,
  
  Session resumption can be used to recognize an individual browser
  that connects from different IP addresses, or even over Tor.  This
  kind of recognition can be perfect because the resumption involves
  a session key which is large, random, and could not legitimately
  have been known to any other browser. :-(
 
 This is not true if the user is using Torbutton. See the paragraph
 about security.enable_ssl2 in:
 https://www.torproject.org/torbutton/en/design/#browseroverlay
 
 This hack causes us to clear all TLS session ID and resumption state.
 It's bloody, but it works. Firefox has also created an official API
 for us to do this the right way that we will begin using in 1.2.6:
 https://trac.torproject.org/projects/tor/ticket/1624





 Particularly with regard to transmittable data [whether remotely or
 locally generated], as opposed to non-transmittable data that is merely
 cached such as images, etc.

The cache can be used to store pieces of HTML, CSS, and JavaScript
containing unique identifiers, which can then be transmitted back to a
server in various ways (even without JavaScript).
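
A sketch of the CSS variant ('a1b2c3' stands for a per-visitor
identifier the server baked into a long-lived cached stylesheet):

  /* Served once with far-future expiry headers, then replayed from
     cache on every later visit, no JavaScript required: */
  .beacon { background: url('/track?id=a1b2c3'); }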


Robert Ransom




Re: Crypto for hidden services [was: TorFaq on https]

2010-10-28 Thread Robert Ransom
On Thu, 28 Oct 2010 22:06:03 -0400
grarpamp grarp...@gmail.com wrote:

 or is it still the general recommendation to
  run hidden services without https?
 
  I would recommend that hidden services not use HTTPS.  The Tor hidden
  service protocol does an adequate job of authenticating servers and
  encrypting traffic to them.
 
 In the hidden service context for all below...
 
 Tor does NOT authenticate any particular underlying service [web, mail, etc],
 nor does it encrypt traffic to/from them.
 
 Tor merely authenticates and encrypts between two Tor daemons, one
 as a client and one as a HS.

Tor verifies that the hidden service's descriptor is signed by a private
key whose public key's truncated hash matches the hidden service
hostname.  For an HTTPS connection, your browser merely verifies that
some CA which the browser's developers have been paid to make users
‘trust’, whether directly or indirectly, has signed a certificate
claiming that the server's public key can be ‘trusted’ to serve a
particular hostname.  Tor's authentication of hidden services is better
than anything HTTPS can do.
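
Concretely, for the hidden service protocol in use at the time, the
.onion name is derived from the service's RSA key, so checking the name
is checking the key (the name below is a placeholder):

  # onion_hostname = base32( first 80 bits of SHA1( DER(public key) ) )
  # => a 16-character name, e.g. xxxxxxxxxxxxxxxx.onion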


 Given an elaborate setup behind a HS, perhaps tunneling the stream
 off the server, across the net, to other parties who terminate it on some
 daemon or cloud. Maybe some WikiLeaks form of submission/storage, or
 joining anon systems, or just a clueless HS admin.

A clueless HS admin can publish all requests which reach his server
onto the Internet.  A malicious HS admin can forward all requests to
NSA, CIA, FBI, Mossad, GCHQ, and whatever other entities are out to get
you.


 Or that someone is able to read the particular crypto Tor uses, but not
 the crypto your tunnel uses.

I'm slightly worried about this, but I currently don't see any tunnel
software in use that uses cryptographic algorithms that I consider
stronger than Tor's.


 Would you, or the provider of the intermediate or final services, not want
 that extra layer of protection just in case? Your bank in its internal cloud?
 
 SSH/IRCS/SILC to behind a HS is an extra tunnel. It costs nothing. Were it
 still available, no one in their right mind would use ssh -c none.

HTTPS to behind a HS costs the user rather a lot of effort, for
minimal, if any, benefit.  Thus, I would recommend that hidden services
not use HTTPS.


  In addition, it is unlikely that any CA
  that Firefox is configured to trust would issue a certificate for
  a .onion hostname.
 
 Perhaps, and quite unfortunately, not. However, even though the
 chain would break on the hostname, it would still be of supplementary
 value if some dual-homed site of importance to the user ran with the
 same cert [fingerprint] as on the internet. Especially given that the
 prevalence of the below aside is presumed to be extremely low.
 
 [aside: As DNSSEC is not global yet, multi-homing a non-onion cert would be
 on a par with a bogus/stolen cert and MITM DNS, for, say, your bank.]

I don't expect most users to verify SSL certificate fingerprints out of
band, whether ‘out-of-band’ means on the non-Tor Internet, over the
telephone network, or through the mythical DNSSEC.


 Is the server's (hidden service's)
  privacy threatened by using https too, in any way?
 
  I don't see any risk to the server.
 
 Not particularly. Though it would add additional fingerprinting
  opportunities beyond Tor and the service themselves. This is
 the only one I can think of.

I thought of this, but the hidden service private key would be enough
of a giveaway.  Having a second private key around is no easier or
harder to hide than having the first private key around.


These objections all apply to HTTPS, TLS, SSH, and generally all
cryptography over Tor, regardless of whether or not the destination
is a hidden service
 
 The whole, well we've got the anon system doing node to node
 encryption/auth, why bother with TLS... sounds an awful lot like
 why Johnny can't encrypt and why the internet still isn't encrypted.
 
 As there doesn't appear to be any real reason NOT to use crypto
 over top of any given anon system, might as well do it just in case.
 Foregoing extra 0-days in crypto libs as applied, and the above
 fingerprinting... why pan it?

There is no real reason not to use another layer of cryptography on top
of Tor hidden services.  Using HTTPS, and convincing users to use
HTTPS, is far harder than merely using another layer of cryptography,
and provides no real benefit.


 And PKI, even amongst the anonymous, can be a very useful thing. Communities
 will be built, and PKI will help. It's no different from the internet.

We have a PKI for hidden services already, designed into the protocol.
I do not expect piling HTTPS on top of that PKI to add any security at
this time.


Robert Ransom




Re: TCP stack attack?

2010-10-23 Thread Robert Ransom
On Sat, 23 Oct 2010 12:42:11 -0700
Julie C ju...@h-ck.ca wrote:

 Has anyone come across any TCP stack implementation vulnerability research?
 I am interested in reading about what has been tried, and what has not been
 tried as yet. At this point in my education it strikes me that the TCP stack
 on any Tor node could be altered to do malicious things, and no one would
 ever know, or be able to know.

Roughly every attack that can be performed in a Tor node's TCP stack
can also be performed by anyone that can stick his own hardware between
the Tor node and the Internet.  There are some attacks that can be
performed there, but an attacker who can modify a Tor node's kernel
would be able to do more damage by reconfiguring or modifying Tor
itself.


Robert Ransom




Re: Where does Tor get its relay lists from?

2010-10-16 Thread Robert Ransom
On Sat, 16 Oct 2010 07:02:10 -0400
hi...@safe-mail.net wrote:

 Every now and then, when you start Tor, it searches for relays/descriptors.
 And I've heard that it does that every now and then while it runs as well.
 
 Does it get this list from a few static IP addresses that never change, 
 or does it pick randomly from thousands of IP addresses/dir lists out there?

https://svn.torproject.org/svn/projects/design-paper/tor-design.html#subsec:dirservers

In the current Tor network, the directory servers sign a ‘consensus’
listing and describing the currently known Tor relays, and most relays
serve copies of the consensus to their clients.


Robert Ransom




Re: Hidden service: Is it possible for an attacker to break out of a VM?

2010-10-07 Thread Robert Ransom
On Thu, 7 Oct 2010 18:12:45 -0400
hi...@safe-mail.net wrote:

 Several people recommend running a hidden service from within a VM, 
 to prevent attackers from doing side channel attacks and reading off your 
 hardware components and serial numbers.

Using a VM doesn't prevent most side-channel attacks.  It only blocks
access to a description of your hardware.

 Then I heard that attackers can actually break out of VMs if they get root
 access due to a successful attack.

It depends on the VM software you are using.


Robert Ransom




Re: Me - Tor - VPN - Internet?

2010-10-07 Thread Robert Ransom
On Thu, 7 Oct 2010 23:58:28 -0400
grarpamp grarp...@gmail.com wrote:

  a free VPN
  There are VPN providers that will let you pay anonymously.
 
 Among others, I would be interested in reading posts
 containing lists of VPN providers that offer one or more
 of these two services. Thanks.

No -- put them on the Hidden Wiki.

Finding *that* is left as an exercise for the reader.


Robert Ransom




Re: Torbutton 1.3.0-alpha: Community Edition!

2010-10-02 Thread Robert Ransom
On Sat, 02 Oct 2010 14:59:42 -0500
David Bennett dbennett...@gmail.com wrote:

 I haven't tried the new version yet. Is there a descriptive popup that
 explains what's happening when a user clicks a tor:// or tors:// link?

Yes.


Robert Ransom




Re: beneficia versus maleficia

2010-10-02 Thread Robert Ransom
On Sat, 02 Oct 2010 15:58:15 -0500
David Bennett dbennett...@gmail.com wrote:

 I am facing a moral dilemma in regards to joining the tor proxy
 network.  I am hoping a discussion may alleviate some of my concerns.
 
 On the pro side we have a group of individuals whose intentions for
 using the technology are consistent with common values.  These include
 uses such as researching medical conditions and accessing/providing
 knowledge forbidden by an authoritarian presence.  On the con side, the
 technology can be used for diabolical purposes such as predatory and
 violent behavior (for example; pedophilia and bomb making).
 
 The technical challenges of discriminating between these uses are
 elusive at best.  One facebook session may be noble while another may be
 predaceous.  Although risk associated with enabling an individual to
 overcome obstacles in the quest for knowledge is acceptable to me, the
 thought of enabling a devious mind to harm other individuals is hard to
 swallow.

People who are already willing to commit crimes can already get
anonymity -- they can use unsecured wireless access points, they can
break into poorly secured computers on the Internet and relay their
traffic through those, they can steal phones to make anonymous phone
calls, they can send letters through the U.S. Postal Service
anonymously, and so on.  Tor is for people who do not want to break the law
in order to keep advertisers
(http://online.wsj.com/article/SB10001424052748703294904575385532109190198.html)
and evil governments
(https://www.eff.org/deeplinks/2010/09/government-seeks,
https://www.eff.org/deeplinks/2010/08/open-letter-verizon, etc.) from
tracking what they read on the Internet.


 I'd like to hear other thoughts and comments about this.

Read https://www.torproject.org/faq-abuse.html.en.


Robert Ransom




Re: BetterPrivacy - necessary?

2010-10-01 Thread Robert Ransom
On Fri, 01 Oct 2010 22:29:48 +0100
Matthew pump...@cotse.net wrote:

   IMHO it's important to suppress active content (Flash, ActiveX,
  Silverlight, JavaScript, etc.) and other junk, and therefore I prefer
  'Privoxy' [1] instead of Polipo.

 I concur but doesn't TorButton do all this suppression?

Torbutton disables plugins (e.g. Java and Flash), and restricts the
capabilities of JavaScript code.


 That said: what was the rationale in moving from Privoxy to Polipo?  Did it 
 happen because TorButton became standard?

I think Polipo was a better cache, and since an HTTP proxy can't filter
evil content out of HTTPS responses, Privoxy's filtering was not very
useful.


Robert Ransom




Re: The best way to run a hidden service: one or two computers?

2010-09-25 Thread Robert Ransom
On Sat, 25 Sep 2010 17:04:14 -0700
Mike Perry mikepe...@fscked.org wrote:

 Thus spake coderman (coder...@gmail.com):
 
  however, if an attacker has access to read this locally they've
  already compromised you to a degree that random mac affords no
  protection...
 
 Is this really true?

If you are running a hidden service, on a computer with no network
access except through Tor, no -- you might not be hosed just by an
attacker being able to run a shell command, but leaking an actual MAC
address from an actual NIC might get you tracked down.  (An attacker
with shell access can read your MAC address on Linux just by running
ifconfig, even as an ordinary user.)

  One of the things I've wondered about here is
 plugins, but since Torbutton disables them for other reasons I haven't
 really looked into it. For instance, I know Java can create a socket,
 and query the interface properties of that socket to get the interface
 IP. Why not mac address? And if not java, can one of flash,
 silverlight, pdf-javascript, or others do this? Already we have
 location features built in to the browser based on nearby Wifi MACs...
 
 The Java trick to get the interface IP does not require special privs,
 so a randomized MAC would in fact help this scenario, if it were
 somehow possible.

I don't know whether browser plugins can be used to read a MAC address,
but if *they* can run a shell command like ifconfig, yes, you are in
real trouble.


Robert Ransom




Re: The best way to run a hidden service: one or two computers?

2010-09-24 Thread Robert Ransom
On Fri, 24 Sep 2010 17:34:05 -0400
hi...@safe-mail.net wrote:

 Robert Ransom:
 
  Also, if you haven't bothered to change your MAC address, an attacker
  with any UID can read it using ifconfig; your hardware manufacturers
  may have kept records of where the device(s) with that MAC address were
  shipped.
 
 I have heard of these attacks, like an attacker reading off your MAC 
 address and even hardware serial numbers. I should be safe regarding 
 serial numbers, but I am somewhat concerned about the MAC address.
 
 It would be very nice to know how to change the MAC address so it says 
 something different when you run the ifconfig utility. Could you, or anyone, 
 please help me with that? I'm using Linux.

Use the macchanger utility.  Make sure you write down your original MAC
first, in case you need to switch back to it later.
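
For example (a sketch; eth0 stands for whichever interface you use):

  # Record the current MAC before changing anything:
  macchanger -s eth0
  # Take the interface down, assign a random MAC, bring it back up:
  ifconfig eth0 down
  macchanger -r eth0
  ifconfig eth0 up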


Robert Ransom




Re: The best way to run a hidden service: one or two computers?

2010-09-24 Thread Robert Ransom
On Mon, 20 Sep 2010 11:00:41 -0400
Gregory Maxwell gmaxw...@gmail.com wrote:

 On Fri, Sep 17, 2010 at 10:41 PM, Robert Ransom rransom.8...@gmail.com 
 wrote:
  If your hidden service really needs to be annoying to find, run it:
 
  * using only well-written, secure software,
  * in a VM with no access to physical network hardware,
  * on a (physical) computer with no non-hidden services of any kind
   running on it (so that an attacker can't use Dr. Murdoch's ‘Hot or
   Not’ clock-skew detection attack),
  * and over a fast enough Internet connection that the adversary cannot
   easily determine your connection's speed.
 
 I think you've missed some points.
 
 * The (Virtual) machine running the hidden service should probably
 also have no _outbound_ network connectivity except via tor.
 
 This is because it can be even easier to trick software on a server
 into making a network connection than it is to remotely compromise the
 server. E.g. your GNU/Linux distribution may have installed some extra
 CGIs in your webserver that you are unaware of...

Yes.  I knew that, and forgot to mention it (at least in that list).

These defenses, and the attacks they are intended to block, need to be
written up in a (hidden?) wiki article, so people setting up sensitive
hidden services can read all of them in one place.

 And here is a potentially controversial suggestion, lets see what
 others say about it:
 
 * You should run your hidden service behind tor bridges rather than
 directly connecting to the tor network.
 
 The rationale for this suggestion is that it may make it more
 difficult for a network observer to enumerate a list of tor clients in
 order to apply things like the clock-skew attack or subject them to
 additional network surveillance.

No.  An attacker *will* find your entry guards (see
http://freehaven.net/anonbib/date.html#hs-attack06); you want them to
have as many clients as possible, so that you still have some chance of
getting lost in the crowd.


  The above precautions are probably enough, unless a three-letter agency
  (or four-letter association) knows about your hidden service and wants
  to find and ‘neutralize’ its operator.  In that case, you have to worry
  about the near-global passive adversary and other threats that Tor
  can't afford to defeat.
 
 I fear that you're overstating the security provided.
 
 For example, I think that if you managed to piss off the ISP community
 vigilantes that go after spammers and botnets that they would have a
 decent chance of tracking you down in spite of your efforts to stay
 hidden.

Probably.  The first time I read the Murdoch-Zieliński paper
http://freehaven.net/anonbib/date.html#murdoch-pet2007, I didn't
notice that someone was actually planning to use the sFlow data to
locate spammers.  


Robert Ransom




Re: The best way to run a hidden service: one or two computers?

2010-09-20 Thread Robert Ransom
On Sun, 19 Sep 2010 07:11:21 -0400
hi...@safe-mail.net wrote:

 Robert Ransom:
 
  The VM is optional *if* and *only if* an attacker cannot possibly get
  root on your hidden service.
 
 How do external attackers get root access on a Linux system, and how do they 
 then communicate with the system as root, like listing directories and 
 changing configuration files as you would have done in a shell, when they're 
 basically limited to a hidden website with the browsers address bar and 
 maybe a few input forms? It gets more sensible when we're talking about 
 default and open websites with the server's true IP addresses and ports out 
 in the public, and exploitation of SSH servers. I'm just curious about that.

If your web server and all of the interpreters and programs it runs are
competently written, there is no way for an attacker to get root
access, or even run a shell command.  Web applications and the
special-purpose interpreters they run on are often incompetently
written.
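
A sketch of the kind of incompetence meant here (a hypothetical CGI
shell script; 'addr' is an attacker-controlled query parameter):

  #!/bin/sh
  # ping.cgi -- DO NOT DO THIS: $QUERY_STRING comes from the attacker,
  # so 'addr=127.0.0.1;id' runs 'id' (or anything else) as the web user.
  eval "ping -c 1 ${QUERY_STRING#addr=}"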

 BTW how do you reply to specific posts? All I'm doing here is replying to 
 my own original post. Thanks.

I select the message I want to reply to, and then I click the “Reply”
button in my mail client's toolbar.


Robert Ransom




Re: The best way to run a hidden service: one or two computers?

2010-09-20 Thread Robert Ransom
On Mon, 20 Sep 2010 09:58:14 -0400
hi...@safe-mail.net wrote:

 Robert Ransom:
 
  If your web server and all of the interpreters and programs it runs are
  competently written, there is no way for an attacker to get root
  access, or even run a shell command.  Web applications and the
  special-purpose interpreters they run on are often incompetently
  written.
 
 I've noticed that on most Linux distributions, Apache 2 (just an example)
 runs as a non-privileged user on the system. One Apache 2 process
 does run as root, but it spawns unprivileged child processes. So if there
 were a flaw in Apache 2, or PHP, that an attacker knew about, would he
 then be able to gain root access if the software runs as a non-root user?

Maybe.  Most Linux distributions do not put much effort into protecting
a system against a malicious user with shell access.  Even if you have
no local privilege-escalation holes, there are usually scary
side-channel attacks (e.g. cache-related leakage of AES keys), and you
may have already given the compromised UID permission to send arbitrary
network packets (if it can run VirtualBox, for example, the attacker
can set up a VM with a bridged network device, log in as root in the
VM, and send evil packets at will).

Also, if you haven't bothered to change your MAC address, an attacker
with any UID can read it using ifconfig; your hardware manufacturers
may have kept records of where the device(s) with that MAC address were
shipped.

  I select the message I want to reply to, and then I click the “Reply”
  button in my mail client's toolbar.
 
 The same as I do. It must be my mail provider that sucks. :)

If you have a Linux system with persistent storage, try Claws Mail.  If
you have a Windows system, gpg4win includes Claws Mail for Windows.
(Unfortunately, it leaks its version number, your GTK version number,
and its build target (including processor architecture) in an X-Mailer
header.)


Robert Ransom




Re: The best way to run a hidden service: one or two computers?

2010-09-17 Thread Robert Ransom
On Fri, 17 Sep 2010 16:36:16 -0400
hi...@safe-mail.net wrote:

 Robert Ransom:
 
  Only if you trust the hardware firewall/router. I wouldn't.
 
 Okay so there aren't that many safe options to run a hidden service really, 
 if any at all?

If your hidden service really needs to be annoying to find, run it:

* using only well-written, secure software,
* in a VM with no access to physical network hardware,
* on a (physical) computer with no non-hidden services of any kind
  running on it (so that an attacker can't use Dr. Murdoch's ‘Hot or
  Not’ clock-skew detection attack),
* and over a fast enough Internet connection that the adversary cannot
  easily determine your connection's speed.


The VM is optional *if* and *only if* an attacker cannot possibly get
root on your hidden service.  The physical computer with no non-hidden
services on it, and the fast Internet connection, are optional if you
do not need to keep your service hidden at all.

Using secure software to run your hidden service is absolutely
essential; if an attacker can get a list of files
in /bin, /usr/bin, /usr/local/bin, /sbin, /usr/sbin, /usr/local/sbin,
and /command, and a list of directories in /usr/local and /opt, he
probably knows enough to identify the service's owner, and more
importantly, he knows enough to recognize another service owned by the
same person.  Your preferred Unix distribution, your favorite editors,
your favorite command-line utilities, etc. are not especially easy to
hide.  (For example, if you find a hidden service running Plan 9 or
Inferno, or with 9base or plan9port installed on it, you're going to
look at me first -- I'm on both the Tor mailing lists and
Plan-9-related mailing lists, and I don't think anyone else is at the
moment.)


The above precautions are probably enough, unless a three-letter agency
(or four-letter association) knows about your hidden service and wants
to find and ‘neutralize’ its operator.  In that case, you have to worry
about the near-global passive adversary and other threats that Tor
can't afford to defeat.


Another, safer, option is to keep your hidden service below the radar
entirely -- it's a lot harder for your adversaries to find something if
they don't know it exists.  I assume that's the approach that the US
Navy uses.


Robert Ransom




Re: The best way to run a hidden service: one or two computers?

2010-09-16 Thread Robert Ransom
On Thu, 16 Sep 2010 15:32:21 -0400
hi...@safe-mail.net wrote:

 Are you saying that Ethernet cards may have backdoors built in,

Yes.  I read a report years ago that at least one model of Ethernet
card had a remote ‘firmware upgrade’ ‘feature’ built in, with
absolutely no authentication of the new firmware blob.  The card
firmware had access to the host's DMA hardware, which can be used to
root the host.

 or did I 
 misunderstand that?

No.


 What if you put a hardware firewall router between the first computer and 
 the second:
 
 [Server box with web server] - [Hardware firewall router] - [Gateway box 
 with Tor] - Internet/Tor entry node
 
 And computer 1 and computer 2 operate on two different IP ranges, while 
 the firewall router sets all the firewall directives between them.
 
 Could this be safer?

Only if you trust the hardware firewall/router.  I wouldn't.


 (I'm not sure if this message came within the thread, since I'm not yet sure 
 about how to reply like that.)

It did.


Robert Ransom




Re: The best way to run a hidden service: one or two computers?

2010-09-13 Thread Robert Ransom
On Mon, 13 Sep 2010 14:12:35 -0400
hi...@safe-mail.net wrote:

 When running a hidden service, obviously hidden so no one can find the 
 true source and IP of the web server because lives may depend on
 that, I've heard that the best and safest way is to use a dedicated 
 server computer with two operating systems and the server being inside a 
 virtual machine. So if the web server should get cracked, the cracker 
 will be locked inside the virtual machine and cannot do side-channel 
 attacks or any other clever methods to reveal the true source.
 
 Then I read somewhere that there's an even more secure way, and that is by
 using two dedicated computers. One computer with the web server running,
 
 being connected with a LAN cable to the second computer which works as a 
 firewalled router with Tor running on it with the hidden service keys. 
 Again, if a cracker cracks the server machine, he will be physically 
 trapped inside the server and cannot access the second computer nor the 
 internet directly.

He *would* be able to access the Ethernet card in the
Internet-connected gateway box, and I have seen reports of at least one
Ethernet card with an unauthenticated remote-update backdoor which
could be used to take over the entire computer through DMA.  At the
very least, virtual network adapters are unlikely to have intentional
backdoors hidden in them.

 What are your opinions on this?
 What should be done and what should be avoided while setting up such 
 systems?

* First, operate the hidden service using software with no security
  holes, and on a (physical) computer that does not operate any
  Internet-visible services (especially not a Tor relay).  Putting your
  hidden service in a virtual machine won't protect you from the
  side-channel attack described in “Hot or Not”.

* Second, if you must use software with security holes to operate your
  hidden service, keep that software in a virtual machine, and do not
  let it communicate with a real network adapter.  (The ‘host-only
  network’ option in VirtualBox should be safe enough, for example.)  I
  don't see a big reason to run Tor in a VM, unless you need to set up
  transparent proxying and don't want to mess up your main OS
  installation.
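
As a concrete illustration of the second point, something like the
following should confine a VirtualBox guest to a host-only network
(the VM name ‘hidden-service’ and the adapter name ‘vboxnet0’ are
placeholders; check the VBoxManage documentation for your version):

  VBoxManage hostonlyif create
  VBoxManage modifyvm "hidden-service" --nic1 hostonly \
      --hostonlyadapter1 vboxnet0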


Robert Ransom


signature.asc
Description: PGP signature


Re: When is the 'MyFamily' setting unnecessary?

2010-09-12 Thread Robert Ransom
On Sun, 12 Sep 2010 20:28:33 -0400
Gregory Maxwell gmaxw...@gmail.com wrote:

 Has anyone previously suggested using a shared secret for family
 configuration?  The protocol might look something like this:
 
 The user configures a secret per family which the node is a member of.
 For each family the secret is processed with key strengthening (such
 as PBKDF2 or, better, scrypt) and a (say) 64-bit family ID and a
 128-bit family-key are derived.  Nodes publish the family IDs.  Upon
 discovering a new node with a common family ID the node contacts the
 matching node and uses the non-advertised family key in a handshake
 (this could be a zero knowledge protocol like socialist millionaires
 or just encrypting a concatenation of nodeIDs and nonces) to prove
 that the key is shared.  After proving the secret is really shared the
 nodes store the results and update their family advertisements.
 
 This would simplify family configuration down to setting a single
 common secret string per family but wouldn't create any change in
 behaviour for non-family nodes and could also exist side by side with
 the old mechanism.

That's the wrong approach.  The config file should contain a random
secret key shared among all relays in a family, and the relays should
publish in their descriptors a public key derived from that secret key,
along with a signature, made with that secret key, over the relay's
current signing key.  With DJB's Curve25519 elliptic-curve
parameters, the
public key can take only 511 bits, and the signature can take only 506
bits.  A smaller curve could fit the public key into 319 bits and the
signature into about 320 bits (the precise size would be determined by
the group order).

This would not be backward-compatible with existing clients, but it
avoids the current quadratic blowup in both the config files and the
total descriptor size.
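
To make that concrete, here is a minimal sketch of the idea in Python,
using Ed25519 via PyNaCl as a stand-in for the Curve25519-based
construction (so the key and signature sizes differ from the figures
above); the fingerprint value is a placeholder:

  import secrets
  import nacl.signing

  # One random secret, generated once and shared out of band among
  # every relay in the family.
  family_secret = secrets.token_bytes(32)

  # Every relay derives the same signing keypair from the shared secret.
  family_key = nacl.signing.SigningKey(family_secret)
  family_pub = bytes(family_key.verify_key)  # published in each descriptor

  # Each relay signs its own current signing-key fingerprint and
  # publishes the signature next to the family public key.
  fingerprint = b'relay signing-key fingerprint'  # placeholder value
  proof = family_key.sign(fingerprint).signature

  # Clients group relays by family_pub, and accept a relay into the
  # family only if its proof verifies.
  nacl.signing.VerifyKey(family_pub).verify(fingerprint, proof)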


Robert Ransom


signature.asc
Description: PGP signature


Re: When is the 'MyFamily' setting unnecessary?

2010-09-12 Thread Robert Ransom
On Sun, 12 Sep 2010 23:36:30 -0400
Gregory Maxwell gmaxw...@gmail.com wrote:

 On Sun, Sep 12, 2010 at 9:40 PM, Robert Ransom rransom.8...@gmail.com wrote:
  That's the wrong approach.  The config file should contain a random
  secret key shared among all relays in a family, and the relays should
  publish in their descriptors a public key derived from that secret key
  along with a signature of the relay's current signing key with that
  secret key.  With DJB's Curve25519 elliptic-curve parameters, the
  public key can take only 511 bits, and the signature can take only 506
  bits.  A smaller curve could fit the public key into 319 bits and the
  signature into about 320 bits (the precise size would be determined by
  the group order).
 
  This would not be backward-compatible with existing clients, but it
  avoids the current quadratic blowup in both the config files and the
  total descriptor size.
 
 There we go—
 
 Perhaps the signature could be shipped only to the directory
 authorities but left out of the published descriptors, no?

No, the client needs to see it in the relay/bridge descriptor.

 (obviously
 they'd need to be left outside of the part signed by the nodes, so
 obviously some reworking is required there).

???

Why?

  Directories would ignore
 nodes that claim families that they can't back up with a valid
 signature. This would open up some attacks by a conspiracy of evil
 directories but it would be detectable and no worse than other kinds
 of attacks available to similarly compromised directories.

I don't see how it could open up any *new* attacks -- the directory
authorities can already ignore relays, or mark them as Invalid, with
near impunity.

 With the signatures left out of the descriptor and 511 bit keys the
 break-even point for descriptor size is four nodes in a family.  A
 very quick check with my cached descriptor data locally suggests that
 this would reduce the aggregate descriptor size significantly compared
 to the current scheme.  (there are enough families with _many_ nodes,
 to offset the fact that most families are small)

Don't forget that the keys and signatures would need to be represented
in ASCII in the descriptors.  If you're willing to break backward
compatibility anyway, there is some room for squeezing the existing
family specifications down, as well (i.e. represent node identity key
fingerprints in base64, or even base85 (only the clients should care
about it, and they can probably eat the performance cost)).

Also, don't forget that we can use an elliptic curve modulo a 159-bit
prime for this -- node family keys are relatively low-value
authentication keys, and since they would only be used to sign nodes'
ephemeral *signing* keys, they can be changed with rather little trouble.


Robert Ransom


signature.asc
Description: PGP signature


Re: When is the 'MyFamily' setting unnecessary?

2010-09-12 Thread Robert Ransom
On Mon, 13 Sep 2010 00:26:02 -0400
Gregory Maxwell gmaxw...@gmail.com wrote:

 On Mon, Sep 13, 2010 at 12:11 AM, Robert Ransom rransom.8...@gmail.com 
 wrote:
  There we go—
  Perhaps the signature could be shipped only to the directory
  authorities but left out of the published descriptors, no?
  No, the client needs to see it in the relay/bridge descriptor.
  they'd need to be left outside of the part signed by the nodes, so
  obviously some reworking is required there).
  Why?
 
 The client needs to see the public key for sure, since that's
 effectively a family ID. Does it need to see the signature if instead
 it trusts the bridges to have validated the signatures and correctly

s/bridges/authorities/, I assume.

 ignored/invalidated only and all the nodes with invalid signatures?
 
 If that was workable it would halve the amount of advertised data required.

It's better not to rely on any trusted third party any more than we
absolutely have to.  For this, we don't need a TTP at all, so we
shouldn't rely on one.

  Don't forget that the keys and signatures would need to be represented
  in ASCII in the descriptors.  If you're willing to break backward
  compatibility anyway, there is some room for squeezing the existing
  family specifications down, as well (i.e. represent node identity key
  fingerprints in base64, or even base85 (only the clients should care
  about it, and they can probably eat the performance cost)).
 
 I was assuming hex, like the current families. 512/160=3.2  Obviously
 base64 would do even better... With smaller ecc and base85 it would
 be rather close in size to the existing fingerprints. (assuming the
 signature was omitted)

OK.

The signature system I had in mind was essentially the system in §4 of
http://www.cs.umd.edu/~jkatz/papers/dh-sigs-full.pdf (the scheme whose
security is proven under the DDH assumption), with an added space
optimization (mainly: compute h as a hash of y1, and publish only y1
and y2).  On a
curve modulo a 159-bit prime, a signature and its public key fit in a
total of about 640 bits.  The only system I know of with a shorter
signature is the Boneh-Lynn-Shacham pairing-based scheme, with 160-bit
signatures and a 512-bit public key, and in this application that's not
a space improvement (total size: 672 bits per descriptor).
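
For concreteness, here is a toy Python rendering of that scheme with
the space optimization included.  The prime-field group, hash choices,
and parameters below are mine, purely illustrative, and far too small
to be secure; a 159-bit elliptic-curve group would take their place in
practice:

  # Toy rendering of the Katz-Wang-style scheme (Python 3.8+ for the
  # modular inverse in pow); quadratic residues mod the safe prime
  # p = 2q + 1 stand in for the elliptic-curve group.
  import hashlib
  import secrets

  p, q, g = 1019, 509, 4   # absurdly small demonstration parameters

  def H(*parts):
      data = b'|'.join(str(x).encode() for x in parts)
      return int.from_bytes(hashlib.sha256(data).digest(), 'big')

  def hash_to_group(y1):
      # Squaring lands in the order-q subgroup, with no dlog known.
      return pow(H(y1) % p, 2, p)

  x = secrets.randbelow(q - 1) + 1   # secret key
  y1 = pow(g, x, p)
  h = hash_to_group(y1)              # the space optimization: h from y1
  y2 = pow(h, x, p)                  # public key is just (y1, y2)

  def sign(m):
      r = secrets.randbelow(q)
      c = H(pow(g, r, p), pow(h, r, p), m)
      return c, (r + c * x) % q      # signature (c, s)

  def verify(m, c, s):
      A = pow(g, s, p) * pow(y1, -c, p) % p   # g^r for an honest signer
      B = pow(h, s, p) * pow(y2, -c, p) % p   # h^r for an honest signer
      return c == H(A, B, m)

  c, s = sign('relay signing key')
  assert verify('relay signing key', c, s)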

  Also, don't forget that we can use an elliptic curve modulo a 159-bit
  prime for this -- node family keys are relatively low-value
  authentication keys, and since they would only be used to sign nodes'
  ephemeral *signing* keys, they can be changed with rather little trouble.
 
 Agreed, that they can be small. Though changing them would require
 per-node configuration. They ought to at least be strong enough to
 discourage mischief, though 159-bit is still harder than anything that
 I'm aware of being cracked and would probably leave guessing the
 secret as the low hanging fruit.

I think Dr. Bernstein is currently attacking a curve of about 130-bit
order.  Even using that curve for this purpose would discourage
mischief: it's still quite hard to find the secret key, and even if you
do find it, it's not very useful, or for very long.  As I said, family
secret keys would be low-value authentication keys, and it is easy to
make a compromised family key useless (just stop using it).

Unless the operator does something *really* dumb like use an easily
guessed character string as his secret (and we can make that difficult
by requiring that it be specified as exactly N base64 characters,
possibly with a checksum prepended by whatever tool we provide to
generate family keys), it's much faster to use an attack based on the
group structure.  There are elliptic-curve groups for which the only
known algorithms to solve the discrete-logarithm problem are
group-generic (i.e., they work on any cyclic group), and the
group-generic methods take time proportional to the square root of the
group order.  Brute-force guessing takes time proportional to the group
order itself.
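
Rough arithmetic, for a group of 159-bit prime order:

  import math

  order = 2 ** 159
  # Group-generic discrete log (e.g. Pollard's rho): ~sqrt(order) steps.
  print(math.log2(math.isqrt(order)))   # about 79.5 bits of work
  # Brute-force guessing of the secret: ~order steps.
  print(math.log2(order))               # 159 bits of work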


Robert Ransom


signature.asc
Description: PGP signature


Re: Google and Tor.

2010-08-25 Thread Robert Ransom
On Wed, 25 Aug 2010 20:04:01 -0700
Mike Perry mikepe...@fscked.org wrote:

 I also question Google's threat model on this feature. Sure, they want
 to stop people from programmatically re-selling Google results without
 an API key in general, but there is A) no way people will be reselling
 Tor-level latency results, B) no way they can really expect determined
 competitors not to do competitive analysis of results using private IP
 ranges large enough to avoid DoS detection, C) no way that the total
 computational cost of the queries coming from Tor can justify denying
 so many users easy access to their site.

If Tor exit nodes were allowed to bypass Google's CAPTCHA, someone
could put up a low-bandwidth Tor exit node and then send their own
automated queries directly to Google from their Tor exit's IP.


Robert Ransom


signature.asc
Description: PGP signature


Re: polipo

2010-08-20 Thread Robert Ransom
On Sat, 21 Aug 2010 09:39:08 +0800
Trystero Lot lo...@callout.me wrote:

 still the same. i uncommented and added user-agent
 
 censoredHeaders = set-cookie, cookie, cookie2, from,accept-language, 
 user-agent
 censorReferer = true
 
 my header is not clean and in fact shows my OS :(
 
 tested using..
 https://anonymous-proxy-servers.net/en/anontest

As I understand it, Polipo can't scrub the headers of an HTTPS
request, even when it is the browser's HTTPS proxy: the browser opens
a CONNECT tunnel through Polipo, and the request headers cross that
tunnel encrypted end-to-end.


Robert Ransom


signature.asc
Description: PGP signature


Re: Tor Project 2008 Tax Return Now Online

2010-08-17 Thread Robert Ransom
On Tue, 17 Aug 2010 09:05:27 -0700
Julie C ju...@h-ck.ca wrote:

 But from an organizational, big picture view, I think it is clearly time for
 them to bring in some evangelical fundraisers to move the Project forward.
 There is a great base to build on. There is a great story to tell. But think
 about it this way - how far is the Project going to go, how successful will
 it be, with the inspirational leaders spending most of their time fixing
 bugs, doing commits, living in the code, and such.

What do you expect the Tor Project to do with zillions of dollars?
Using donated funds to operate more relays, bridges, and exit nodes
won't help much -- Tor nodes need to be dispersed among as many
different operators and ISPs as possible.  Using donated funds to
improve the Tor software is a good thing, but there is a limit to how
much money can be thrown at that -- Tor developers must be competent
programmers, and must understand the Tor software and protocol well.

Also, remember that Tor's opponents would put much more effort into
blocking Tor if it were heavily promoted in the Western media.  (China
and Iran are not Tor's only opponents -- here in the US, misguided
politicians want to criminalize operating a Tor relay (see S. 436
http://thomas.loc.gov/cgi-bin/query/z?c111:S.436.IS:).)


 Also if you are challenging me to speak up, well here I am, and here I will
 continue to be. Personally I am also looking at what part of the Tor
 software I can work on myself as part of my upcoming thesis term at school
 ...

What are you studying?  Perhaps we can help you find a way to work on
Tor.


Robert Ransom


signature.asc
Description: PGP signature


Re: Tor Project 2008 Tax Return Now Online

2010-08-17 Thread Robert Ransom
On Tue, 17 Aug 2010 12:09:29 -0700
Robert Ransom rransom.8...@gmail.com wrote:

 Also, remember that Tor's opponents would put much more effort into
 blocking Tor if it were heavily promoted in the Western media.  (China
 and Iran are not Tor's only opponents -- here in the US, misguided
 politicians want to criminalize operating a Tor relay (see S. 436
 http://thomas.loc.gov/cgi-bin/query/z?c111:S.436.IS:).)

Oops -- I just re-read the bill, and it's somewhat less broad than I
thought when I first saw it.  It still seems to criminalize running a
Tor relay with a directory mirror, or running a Tor relay without full
logging, or running a Tor relay at all if you also run a web server or
provide an Internet mail-like service.


Robert Ransom


signature.asc
Description: PGP signature


Re: DuckDuckGo now operates a Tor exit enclave

2010-08-15 Thread Robert Ransom
On Sun, 15 Aug 2010 17:40:16 +0200
Michael Scheinost mich...@scheinost.org wrote:

 Hi all,
 
 thanks a lot for your answers.
 I did some additional reading and now have a vague idea how tor exit
 enclaving works.
 As far as I understand, enclaving doesn't break tor anonymity and
 privacy. Quite contrary to this, anonymity may be even enhanced by it
 (https://trac.torproject.org/projects/tor/wiki/TheOnionRouter/TorFAQ#WhatisExitEnclaving).
 
 On the other hand, there are still some points coming up with the post
 of Eugen that remain unclear to me:
 
 1. Eugen is posting this text from
 http://www.gabrielweinberg.com/blog/2010/08/duckduckgo-now-operates-a-tor-exit-enclave.html
 without any comment to this mailing list. This blog entry looks a lot
 like an advertisement to me. Eugen's intentions are hidden. So perhaps he
 is connected to duckduckgo.com in some way or perhaps he is not.

I don't know whether Eugen Leitl is connected to DuckDuckGo, but he has
routinely posted/forwarded Tor-related news stories to the mailing
list.  Search for his name in the archives at
http://archives.seul.org/or/talk/.

As for whether the blog post is an advertisement, Gabriel Weinberg
created, owns, and operates DuckDuckGo, and readers of his blog are
presumably interested in his business ventures and already aware of
DuckDuckGo.

 2. Why is it offering HTTP?
 If duckduckgo.com really cares for the anonymity and privacy of its
 users, why do they offer unencrypted HTTP?

From a comment posted by ‘phobos’ (Andrew Lewman) on
https://blog.torproject.org/blog/life-without-ca:

| The reason we as tor allow http and do not automatically redirect to
| https is that some companies and countries block ssl websites by
| default. I've seen this in action at a few banks around the world. They
| feel they need to surveil their employees to meet audit requirements.
| If we automatically redirected to the ssl site, many people would be
| sad. Some countries in the Middle East block ssl versions of sites, but
| not the non-SSL version. Simply forcing SSL everywhere is fraught with
| complexities. However, enabling SSL for users to choose is a fine
| option. You'll notice my links were to the ssl version of a site if it
| existed.

DuckDuckGo probably allows non-SSL access for the same reasons.

Also, they would need to have an HTTP service that redirects to their
HTTPS URL in order to support users typing ‘duckduckgo.com’ into a
browser without a URL scheme.  Such a redirect can't be sent before the
browser has sent the request (and URL) in the clear, and once the user
has sent a request in the clear, sending the response back in the clear
doesn't hurt their privacy any further.
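
Schematically, the plain-HTTP side can be nothing more than this
(standard-library Python; by the time the handler runs, the request
line has already crossed the wire unencrypted):

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class RedirectToHTTPS(BaseHTTPRequestHandler):
      def do_GET(self):
          # The path arrived in cleartext; all we can do now is point
          # the browser at the HTTPS site.
          self.send_response(301)
          self.send_header('Location', 'https://duckduckgo.com' + self.path)
          self.end_headers()

  # Port 80 needs privileges; illustration only.
  HTTPServer(('', 80), RedirectToHTTPS).serve_forever()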

 Even if tor users are encouraged to use HTTPS, some of them will forget
 doing so.

https://www.eff.org/https-everywhere/

But it wouldn't be needed *if* you could ensure that you are using the
exit enclave.

 3. This site requires JavaScript.
 In my opinion this point is the worst: When I entered
 https://duckduckgo.com with NoScript enabled (my default) I can read the
 message This site requires JavaScript. just below the search box. So
 duckduckgo.com wants its users to turn on JavaScript. But with
 JavaScript enabled your anonymity is nearly switched off.

It looks like they mainly use JavaScript to load search results lazily
(when the user scrolls down so that the end of the page is visible).
Their FAQ (https://duckduckgo.com/faq.html) says that they are
actively working on a non-JavaScript version.  I hope they finish it
soon; their site wedged my browser the first time I tried it.

For now, Torbutton can block many of the scary JavaScript-based attacks
while still allowing JavaScript to run.

 Perhaps duckduckgo.com's primary intention is not offering anonymous
 services. Probably they just want to offer another alternate search
 engine. And perhaps they just think offering a tor enclave is a nice
 addon. So perhaps in conclusion, they didn't think much about anonymity
 and privacy. I don't know it.

https://duckduckgo.com/privacy.html

 But why was this ad posted to the tor mailinglist?

I don't know why Gabriel Weinberg didn't post a link to his blog post
to the list himself.  Advertisement or not, it is certainly an
appropriate news item for this list.


Robert Ransom


signature.asc
Description: PGP signature


Re: DuckDuckGo now operates a Tor exit enclave

2010-08-14 Thread Robert Ransom
On Sat, 14 Aug 2010 16:09:18 +0100
Geoff Down geoffd...@fastmail.net wrote:

 On Sat, 14 Aug 2010 09:20 -0400, Ted Smith ted...@gmail.com wrote:
 
  An exit enclave is when a service operates a Tor exit node with an
  exit policy permitting exiting to that service. Tor will automagically
  extend circuits built to that host from three hops to four, such that
  your traffic will exit on localhost of the service you are intending to
  use. This means that users will use DDG's node when building circuits
  that terminate at duckduckgo.com or whatever.
  
 Really? Duckduckgo.com is on AS19262 Verizon, but when I accessed it, it
 was via an exit node on AS30058 ACTIVO-SYSTEMS.

I don't remember where I read this, but at the moment, exit enclaving
only works if your Tor client has already downloaded and cached the
relay descriptor for the destination host.


Robert Ransom


signature.asc
Description: PGP signature


Re: Restricted Exit Policy Port Suggestions?

2010-08-12 Thread Robert Ransom
On Wed, 11 Aug 2010 03:05:24 -0700
Mike Perry mikepe...@fscked.org wrote:

 It's become clear that it is almost impossible to run an exit node
 with the default exit policy in the USA, due to bittorrent DMCA abuse
 spambots. I believe this means that we should try to come up with one
 or more standard, reduced exit policy sets that allow use of the
 majority of popular internet services without attracting bittorrent
 users and associated spam.
 
 Using previous threads, I have an initial sketch of such a policy at:
 https://blog.torproject.org/blog/tips-running-exit-node-minimal-harassment
 
 It includes the following ports: 20-22, 53, 79-81, 110, 143, 443, 465,
 563, 587, 706, 873, 993, 995, 1863, 5190, 5050, 5222, 5223, 8008,
 8080, .
 
 While looking over the Vidalia settings, I just noticed that IRC is
 missing from this list: , 6667, 6697. 
 
 However, IRC is also a common source of abuse and DDoS attacks, and is
 often forbidden by ISP AUP. Because of this, I was thinking we should
 probably define 3 or 4 levels of Exit Policy:
 
 1. Low Abuse (above list, possibly minus 465, 587 and 563)
 2. Medium Abuse (above list, plus IRC)
 3. High Abuse (default exit policy)
 
 Now the question is, what other ports should we add or subtract from
 this list?

I just looked through the IANA-registration-based services file from
iana-etc 2.30 (http://sethwklein.net/iana-etc/ as installed
to /etc/services on Arch Linux).  Here are my recommendations:


Add:

* 70 (Gopher)
* 504 (Citadel (a BBS; see http://citadel.org/))
* 553 (PIRP (see http://cr.yp.to/proto/pirp.txt))
* 564 (9P (related to Plan 9; documented at multiple sites))
* 1649 (IANA-registered Kermit port)
* 2401 (CVS pserver)
* 2628 (DICT (see http://www.dict.org/ and/or IETF RFC 2229))
* 3690 (Subversion)
* 4155 (bzr version control system)
* 4349 (fsportmap (related to Plan 9))
* 4691 (Monotone version control system)
* 5999 (CVSup)
* 6121 (SPDY)
* 9418 (Git)
* 11371 (HKP (“OpenPGP HTTP Keyserver”))


Gopher and Kermit are still in use; Citadel is in use, and the protocol
used on port 504 appears to support TLS.  PIRP may or may not be in
use, but I do not expect abuse complaints related to it.  9P is useful
over the Internet, and the Plan 9 ports are unlikely to be exposed to
the Internet (or accessed!) unintentionally or by technically clueless
users for the foreseeable future, so they should not result in abuse
complaints.  CVSup can be used to upgrade FreeBSD to a -CURRENT
system.  The rest of the ports listed above need no further explanation.
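
For an operator who wants to adopt these, the torrc fragment might
look like the following (an abridged sketch: a few base ports from
Mike's list plus some of the additions above; extend the pattern with
the remaining ports):

  ExitPolicy accept *:20-22,accept *:53,accept *:79-81
  ExitPolicy accept *:110,accept *:143,accept *:443
  ExitPolicy accept *:70,accept *:2401,accept *:3690
  ExitPolicy accept *:4155,accept *:4691,accept *:9418
  ExitPolicy accept *:11371
  ExitPolicy reject *:*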


Other ports to consider:

* 194 (IANA-registered IRC port)
* 994 (IANA-registered IRC-SSL port)
* 1080 (IANA-registered SOCKS port)
* 1789 (in IANA services file, registered to DJB; described only as
  “hello”; possibly useful for testing connectivity to a
  soon-to-be-public server)
* 5191..5193 (other AOL ports; 5190 is already listed)
* 5556 (FreeCiv (turn-based game))
* 5688 (GGZ Gaming Zone (probably low-data-rate, although the protocol
  is probably not useful over Tor and should be checked for unwanted
  information disclosure))
* 6665 (in IANA services file; described only as “IRCU”)
* ..6673 (not listed in IANA services file, but used unofficially
  by the Inferno VM; overlaps with customary IRC ports; no ports in
  this range are listed as used by file-sharing programs)
* 8074 (Gadu-Gadu)
* 8990..8991 (in IANA services file; described as “webmail HTTP(S)
  service”)


I don't expect these ports to cause much trouble for the Tor exit node
(except possibly the IRC ports).  Port 1080 can be used to reach
BitTorrent or other rude services, but that's a little trickier for the
client to set up than Tor alone, and it is less likely to result in
DMCA complaints sent to the Tor exit operator (although the SOCKS
server operator may complain).


Robert Ransom


signature.asc
Description: PGP signature


Re: Restricted Exit Policy Port Suggestions?

2010-08-12 Thread Robert Ransom
On Wed, 11 Aug 2010 08:44:38 -0400
and...@torproject.org wrote:

 On Wed, Aug 11, 2010 at 03:05:24AM -0700, mikepe...@fscked.org wrote 1.8K 
 bytes in 55 lines about:
 : It's become clear that it is almost impossible to run an exit node
 : with the default exit policy in the USA, due to bittorrent DMCA abuse
 : spambots. I believe this means that we should try to come up with one
 : or more standard, reduced exit policy sets that allow use of the
 : majority of popular internet services without attracting bittorrent
 : users and associated spam.
 
 Giving in to the automated accusations of DMCA violations is a sad
 statement on the contemporary Internet.  It seems the chilling effects
 of the DMCA are so palpable, no one wants to fight back any more, not
 users and not ISPs. See http://chillingeffects.org/ for more analysis
 and options on how to respond. Are there no ISPs/datacenters left in the
 USA willing to defend the First Amendment of the US Constitution and the
 user's legal protections under patent/trademark/copyright laws?

What you need is a federal prosecutor willing to put the DMCA-abuse
spammers behind bars for a zillion counts of perjury.  The threat of
the EFF sponsoring an occasional lawsuit over a blatantly false
accusation won't deter them; the spammers operate as ‘independent’
corporations with no real assets in their names, and if one shell
company gets zapped in civil court, they'll close it and start two new
ones running the same software the next day.  The threat of being sent
to prison for the next 2000 years might make those scum turn off their
spambots and go ooze back to wherever they came from.


Robert Ransom


signature.asc
Description: PGP signature


Re: Padding again Was: Practical web-site-specific traffic analyses

2010-08-01 Thread Robert Ransom
On Sun, 1 Aug 2010 23:02:53 -0400
Gregory Maxwell gmaxw...@gmail.com wrote:

 On Sun, Aug 1, 2010 at 9:07 PM, Steven J. Murdoch
 tortalk+steven.murd...@cl.cam.ac.uk wrote:
 [snip]
  To fix this attack, systems can add dummy traffic (padding), delay
  packets, and/or drop packets. Tor adds a bit of padding, but unlikely
  enough to make a difference. Tor doesn't (intentionally) drop or delay
  traffic.
 
  More research is needed before we will know how to best to use and
  combine these traffic analysis resistance techniques. I co-authored a
  paper on some aspects of this problem, but while the combination of
  delaying and padding is promising, more needs to be done before this
  can be deployed in a production system:
 
   http://www.cl.cam.ac.uk/~sjm217/papers/pets10topology.pdf
 
 The overhead of padding schemes that I've seen, either end to end
 type, or hop-based for free routed networks as presented above, are
 simply too large to be practical.
 
 I'd also guess that there might be a negative social effect where
 people would be less inclined to run relays if they knew that only a
 small fraction of the traffic was actually good-put.
 
 I think this makes a good argument for combining tor with a
 high-latency higher anonymity service— so that the padding for the
 most timing attack vulnerable traffic can still be good traffic
 sourced from high latency mixes stored at nodes. ... but this wouldn't
 be simply accomplished, and I'm not aware of any ongoing research
 along these lines.

Assuming the user can't just make his Tor node a relay and wait for
other random people to start stuffing their data through it, I would
suggest the following padding strategy:

* Limit the Tor client's download bandwidth to about 10 kB/s or less,
  to reduce the amount of padding needed.

* Limit the Tor client to one TLS connection, so that all incoming
  traffic is roughly indistinguishable to the attacker we're
  considering (a passive eavesdropper on the user's link to the Tor
  network).

* If possible, introduce delays into outgoing non-RELAY_SENDME cells to
  mask keystroke timing.

* To pad your connection, download a large, useful file through Tor in
  the background.
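
A minimal sketch of that last point, assuming Tor's SOCKS port on
localhost:9050 and the Python requests library with SOCKS support
installed (requests[socks]); the URL is a placeholder:

  import time
  import requests

  proxies = {'http': 'socks5h://127.0.0.1:9050',
             'https': 'socks5h://127.0.0.1:9050'}

  # Stream a large file through Tor at roughly 10 kB/s (matching the
  # rate limit suggested above) to keep the link busy with good-put.
  resp = requests.get('https://example.org/large-useful-file.iso',
                      proxies=proxies, stream=True)
  for chunk in resp.iter_content(chunk_size=10 * 1024):
      time.sleep(1)   # crude rate limit: one 10 kB chunk per second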

A higher-latency anonymity service within Tor would be a Good Thing,
but we don't seem to have one at the moment, and it's probably not
necessary to block this attack.

Robert Ransom


signature.asc
Description: PGP signature