Re: How to Run High Capacity Tor Relays
I should have said this in my first post, but I believe that all subsequent replies should go to tor-relays. This should be the last post discussing technical details of relay operation on or-talk.

Thus spake coderman (coder...@gmail.com):

> net.ipv4.tcp_keepalive_time = 1200
> ^- who uses keepalive? :)

Hrmm, Tor does its own application-level keepalive. Perhaps that's how this got merged in by confusion. Or maybe, like many of these, it was just a blanket cut-and-paste move out of desperation to try to increase capacity. The whole superset-of-voodoo thing.

> net.netfilter.nf_conntrack_tcp_timeout_established=7200
> net.netfilter.nf_conntrack_checksum=0
> net.netfilter.nf_conntrack_max=131072
> net.netfilter.nf_conntrack_tcp_timeout_syn_sent=15
> ^- best to just disable conntrack altogether if you can. -j NOTRACK in the raw table as appropriate. you're going to eat up lots of memory with a decent nf|ip_conntrack_max (check /proc/sys/net/ipv4/netfilter/ip_conntrack_max, etc)

Will this remove the ability to do PREROUTING DNAT rules? I know a lot of Tor nodes forward ports and even IPs around. Good suggestion though. Perhaps we should mention both options in the final draft.

> [...] some dupes in here?
> net.ipv4.ip_forward=1 ... net.ipv4.conf.default.forwarding=1
> net.ipv4.conf.default.proxy_arp = 1
> ^- BAD! this should not be enabled by default unless you're actually routing specifically to guest vm's or between interfaces or something. if you enable forwarding by default, someone may use you to relay some malicious traffic.

Oh shit, that is a relic of Moritz's config. He is also planning to provide VPN and VPS services. Good catch. Also, does DNAT count as forwarding for the ip_forward option?

> == Did I leave anything out? ==

Well, did I?

> i'd love to see an sca6000 accelerated node. been working with these recently but unfortunately they're allocated for other work... (most of the other crypto hw is going to be bus / implementation limited to less than what a beefy 64bit modern server can provide, so of little utility in this context.)

I'd love to hear Roger and Nick's comments on this, but isn't it possible this might also bottleneck well before 1Gbit? I am worried it may depend largely on the architecture of the card and our use of OpenSSL. Their docs claim up to 1Gbit, but this could be using highly parallelized processing, which Tor cannot really do, as I understand it.

Personally I think the hyperthreading option is the lowest-hanging fruit for maxing out a single Tor relay process at the lowest cost. Also, afaik, zero people in the wild are actively running Tor with any crypto accelerator. May be a very painful process... I'm not really interested in documenting it unless it's proven to scale by actual use. I want this document to end up with tested and reproduced results only. You know, Science. Not Computer Science ;)

--
Mike Perry
Mad Computer Scientist
fscked.org evil labs
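For reference, the conntrack bypass coderman suggests could look roughly like the following. This is only a sketch: the port numbers (9001/9030) are the common ORPort/DirPort defaults, not values from this thread, and rule placement depends on your existing firewall.

```shell
# Sketch: skip connection tracking for Tor relay traffic via the raw table.
# Assumes ORPort 9001 and DirPort 9030 -- substitute your own ports.
iptables -t raw -A PREROUTING -p tcp --dport 9001 -j NOTRACK
iptables -t raw -A PREROUTING -p tcp --dport 9030 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 9001 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 9030 -j NOTRACK

# If you need conntrack (e.g. for PREROUTING DNAT, which requires tracked
# connections), keep it but budget the table size against available RAM:
#   sysctl -w net.netfilter.nf_conntrack_max=131072
```

Note the trade-off discussed above: NOTRACKed packets bypass the NAT machinery entirely, so they will not match DNAT rules.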
Re: The team of PayPal is a band of pigs and cads!
Thus spake David Carlson (carlson...@sbcglobal.net):

> On 8/23/2010 2:05 PM, Andrew Lewman wrote:
>> They are correct, https://cms.paypal.com/us/cgi-bin/?cmd=_render-content&content_ID=ua/UserAgreement_full&locale.x=en_US Section 9.1, j. Apparently they don't want you as a customer if you want to protect yourself from unscrupulous marketing or local ISP surveillance. I'll start a conversation with them. Thanks for bringing this up.
> I am a newbie here. Since they use SSL, isn't it overkill to route your connection through Tor? I know it is a pain to switch Tor on and off when multitasking, but it would seem that Torbutton could do that.

There have already been lots of excellent PayPal-specific answers here, but the more general problem is that any company on the web is, or at least eventually will consider, making money off of your information, any way they can. Using the Firefox addon RequestPolicy really makes you aware of this. For example, I've seen facebook domains sourced in my airline ticket purchase windows. When I happen to be wearing my paranoid hat (pretty often -- it's rather stylish), I am convinced this is Facebook's way of getting to ground-truth on an identity they have a profile for, because plane ticket use is strongly authenticated.

I'm sure the same thing can happen with any bank, or even online merchant. Once they get a purchase with a cookie set and IP, they not only know your location, but they can correlate it with the rest of the marketing data they have on you, and if you don't clear your cookies or use Tor+Torbutton, they can infer a list of the rest of the websites you visit. In this case, "they" of course are the voracious advertising companies[1]. In fact, because so many users are actually clearing cookies these days, marketing companies have begun developing fingerprints to track you even if your cookies and IP change: https://wiki.mozilla.org/Fingerprinting. Torbutton blocks most of these, but work needs to be done to block more.
So for me, Tor is about cutting that crap off at the bud. If I must be strip-searched at the airport (digitally or not), and have my airline ticket purchase IP recorded at the DHS[2], at the very least they will not correlate that with my other Internet activity.

In fact, you could take the PayPal conspiracy one further, in that they also don't like many forms of prepaid gift card use. They are simply not interested in collecting information that contains any noise whatsoever...

1. http://techcrunch.com/2009/11/06/zynga-scamville-mark-pinkus-faceboo/
2. http://current.newsweek.com/budgettravel/2008/12/whats_in_your_government_trave.html

--
Mike Perry
Mad Computer Scientist
fscked.org evil labs
Re: blutmagie TNS / v0.2.2.15 nodes
Hi Olaf,

On 8/25/10 12:10 PM, Olaf Selke wrote:
> blutmagie Tor network status site apparently displays incorrect bandwidth values for all nodes running version 0.2.2.15. Unlike other tns sites, blutmagie calculates bw as an average from the extra-info data instead of using the bw peak value. So far I don't have a clue what's going wrong. The extra-info format might have changed or my Perl script populating the mysql db might be buggy. Blutmagie4, which is running v0.2.2.14 for testing purposes, still shows up with the correct bw: http://torstatus.blutmagie.de/index.php?SR=Bandwidth&SO=Desc. All 0.2.2.15 nodes like trusted, teunTest, or the other three blutmagie nodes are displayed with a bw that is obviously much too low.

This might be related to:

Changes in version 0.2.2.15-alpha - 2010-08-18
- Relays report the number of bytes spent on answering directory requests in extra-info descriptors similar to {read,write}-history. Implements enhancement 1790.

There are now two new lines, "dirreq-read-history ..." and "dirreq-write-history ...", containing the bytes spent on the dir protocol. Maybe TNS greps for "read-history" and not "^read-history" when parsing descriptors?

I'll have more time to investigate this tomorrow. Please let me know if you find something interesting in the meantime.

Thanks,
--Karsten

*** To unsubscribe, send an e-mail to majord...@torproject.org with unsubscribe or-talk in the body. http://archives.seul.org/or/talk/
Re: blutmagie TNS / v0.2.2.15 nodes
On Wednesday, 25 August 2010 at 12:35:45, you wrote:
> Hi Olaf, On 8/25/10 12:10 PM, Olaf Selke wrote: blutmagie Tor network status site apparently displays incorrect bandwidth values for all nodes running version 0.2.2.15. [...] Maybe TNS greps for "read-history" and not "^read-history" when parsing descriptors? I'll have more time to investigate this tomorrow. Please let me know if you find something interesting in the meantime. Thanks, --Karsten

Hello,

Thanks for your post. I was running TorTeamHelp with 0.2.2.15-alpha-dev and using extra-info to send stats, and from that it appears my bandwidth was shown as 4 KB instead of 400 KB. So I was surprised to see the average so low... :P

Have a great day
Google and Tor.
On numerous occasions when using Google with Tor (yes, I know there are other options like Scroogle) it claims I might be sending automated queries and gives me a CAPTCHA. Sometimes this allows me to search; other times I am caught in a loop and am constantly sent back to the CAPTCHA screen. I am wondering why Google does not deal with this. I can understand that if dozens of people are using the same IP then some sites think zombies are being used. But if the IP is a Tor node then this is not the case. Google could surely exclude these Tor IPs. So my question is: why don't they? What are the politics behind their decision not to acknowledge Tor exit nodes as bona fide?
Re: Google and Tor.
On 25/08/10 15:38, Gregory Maxwell wrote:
> On Wed, Aug 25, 2010 at 6:28 AM, Matthew pump...@cotse.net wrote:
>> On numerous occasions when using Google with Tor (yes, I know there are other options like Scroogle) it claims I might be sending automated queries and gives me a CAPTCHA. [...] So my question is: why don't they? What are the politics behind their decision not to acknowledge Tor exit nodes as bona fide?
> Really? This isn't obvious?

Would I have asked if it was obvious?

> People are running automated datamining queries _via tor_ in order to gain control of more IPs and avoid being blocked.

What is a datamining query exactly? Is this what I would call typing some text into the search box and pressing enter? And how does entering a datamining query allow one to gain control of more IPs? And being blocked, from what? Totally confused.

> Even if they weren't, they'd certainly start if Google exempted tor exits.
Re: Google and Tor.
On Wed, Aug 25, 2010 at 11:31 AM, Matthew pump...@cotse.net wrote:
>> People are running automated datamining queries _via tor_ in order to gain control of more IPs and avoid being blocked.
> What is a datamining query exactly? Is this what I would call typing some text into the search box and pressing enter? And how does entering a datamining query allow one to gain control of more IPs? And being blocked, from what? Totally confused.

For example, a friend of mine was querying Google Maps to find out their estimated travel time between every pair of US cities over some size threshold. After about a month of this they blocked her IP, and she moved to using Tor, spreading the traffic across many exits (which, as far as I know, they never ended up blocking). People do bulk Google queries to look for sites to spam (e.g. by googling for UI elements from wiki software plus keywords useful for their spammish purposes). These are the datamining things I was referring to.

Another example: some people have operated fake search engines which do nothing but serve their own ads/malware and then direct the real queries back to Google. I'm sure there is a ton of potentially abusive behaviour which I've never seen or thought of but which Google is aware of.

I think it would be nice if captchas and blocking weren't the only anti-DoS/anti-abuse mechanisms used on the web today, but this is the world we live in.
Re: blutmagie TNS / v0.2.2.15 nodes
On 25.08.2010 12:35, Karsten Loesing wrote:
> On 8/25/10 12:10 PM, Olaf Selke wrote: blutmagie Tor network status site apparently displays incorrect bandwidth values for all nodes running version 0.2.2.15. Unlike other tns sites blutmagie calculates bw as an average from the extra-info data instead of using the bw peak value. [...]
> This might be related to: Changes in version 0.2.2.15-alpha - 2010-08-18 - Relays report the number of bytes spent on answering directory requests in extra-info descriptors similar to {read,write}-history. Implements enhancement 1790. There are now two new lines, "dirreq-read-history ..." and "dirreq-write-history ...", containing the bytes spent on the dir protocol. Maybe TNS greps for "read-history" and not "^read-history" when parsing descriptors?

Yes, exactly! And because the dirreq-*-history data lines appear after the ordinary read/write history data, the script summed up the dirreq bandwidth. Thus blutmagie tns mistakenly displayed the (in itself correct) directory request bandwidth for all 0.2.2.15 nodes. This is fixed now.

regards Olaf
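Karsten's guess about the unanchored pattern can be demonstrated with a small sketch. The descriptor lines below are illustrative, not real relay data: an unanchored match for "read-history" also picks up the new dirreq-read-history line, while anchoring at the start of the line does not.

```python
import re

# Illustrative extra-info descriptor fragment (made-up values).
descriptor = """\
read-history 2010-08-25 12:00:00 (900 s) 1000,2000,3000
write-history 2010-08-25 12:00:00 (900 s) 1000,2000,3000
dirreq-read-history 2010-08-25 12:00:00 (900 s) 10,20,30
dirreq-write-history 2010-08-25 12:00:00 (900 s) 10,20,30
"""

# Unanchored: also matches inside "dirreq-read-history", so two hits.
loose = re.findall(r"read-history (.*)", descriptor)

# Anchored at line start: matches only the plain read-history line.
strict = re.findall(r"^read-history (.*)", descriptor, re.MULTILINE)
```

Since the dirreq line comes last in the descriptor, a parser that keeps the last unanchored match ends up reporting the (much smaller) dirreq bandwidth, which matches the symptom Olaf describes.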
Re: Translation update
Hi,

Runa A. Sandvik wrote (24 Aug 2010 18:48:58 GMT):
> Let me know if you have any questions and/or suggestions.

T(A)ILS is the result of the merge of the Incognito and Amnesia Live systems; it is listed on the Tor projects page [1]. Is it imaginable to add the T(A)ILS website [2] to the Tor Project's Pootle instance?

Rationale: on the one hand, our website already offers a translation interface based on PO files; on the other hand, some of our translators have expressed the need for a better UI. They are used to Pootle's. They would love it if the "Improve translation" link present on almost every page of our site pointed to Pootle.

Technical details: our website is based on ikiwiki and is managed with Git. We can provide the needed access via ssh keys to enable Pootle to read from and write to our Git repository. From the Pootle version control documentation [3] I can infer that our website's layout falls into the "special directory layouts" category that needs a bunch of symlinks to emulate the standard Pootle directory layout. I'm sure we can easily provide a script that creates and maintains the needed symlinks in a target directory that Pootle could work on. Our Git repository additionally contains PO files for a few programs that are specific to T(A)ILS. I guess they could also be translated using Pootle, provided the symlinks are properly maintained.

Thoughts and comments welcome, both on technical and non-technical matters.

[1] https://www.torproject.org/projects/
[2] https://amnesia.boum.org/
[3] http://translate.sourceforge.net/wiki/pootle/version_control#how_to_treat_special_directory_layouts

Bye,
--
intrigeri intrig...@boum.org
| GnuPG key @ https://gaffer.ptitcanardnoir.org/intrigeri/intrigeri.asc
| OTR fingerprint @ https://gaffer.ptitcanardnoir.org/intrigeri/otr-fingerprint.asc
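Such a symlink-maintenance script might look roughly like the sketch below. All paths and the PO naming scheme (page.LANG.po) are assumptions for illustration only, not the actual T(A)ILS repository layout; the idea is simply to rename ikiwiki-style PO files into Pootle's standard project/language/file.po tree.

```shell
#!/bin/sh
# Hypothetical sketch: expose ikiwiki-style PO files (e.g. index.fr.po) to
# Pootle under its standard <project>/<language>/<file>.po layout via symlinks.
set -e

SRC="$PWD/wiki"           # ikiwiki Git checkout (assumed location)
WORK="$PWD/pootle-work"   # directory Pootle would be pointed at

mkdir -p "$SRC" "$WORK"
: > "$SRC/index.fr.po"    # stand-in for a real translated page

for po in "$SRC"/*.*.po; do
    base=$(basename "$po" .po)   # e.g. index.fr
    lang=${base##*.}             # e.g. fr
    page=${base%.*}              # e.g. index
    mkdir -p "$WORK/website/$lang"
    ln -sf "$po" "$WORK/website/$lang/$page.po"
done
```

Re-running the script after a Git pull would refresh the links, since `ln -sf` replaces existing symlinks in place.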
Re: blutmagie TNS / v0.2.2.15 nodes
On Wed, 25 Aug 2010 18:49:12 +0200, Olaf Selke olaf.se...@blutmagie.de wrote:
> Am 25.08.2010 12:35, schrieb Karsten Loesing: [...] Maybe TNS greps for "read-history" and not "^read-history" when parsing descriptors?
> Yes, exactly! And because the dirreq-history data lines appear after the ordinary read/write history data, the script summed up the dirreq bandwidth. Thus blutmagie tns mistakenly displayed the (correct) directory request bandwidth for all 0.2.2.15 nodes. This is fixed now. regards Olaf

Hi Olaf,

Thanks for fixing it! I confirm that it is fixed for me :D

Have a great day.

SwissTorHelp
Exit node Poul censoring sites
Would the owner of exit Poul (B8EB 1587 F2C8 7E3D C05A 08E7 A68F 375B 5B23 368F) please turn off OpenDNS URL blacklisting.

-- http://www.fastmail.fm - IMAP accessible web-mail
Re: Google and Tor.
Gregory Maxwell wrote:
> On Wed, Aug 25, 2010 at 11:31 AM, Matthew pump...@cotse.net wrote:
>> People are running automated datamining queries _via tor_ in order to gain control of more IPs and avoid being blocked.
> I think it would be nice if captchas and blocking weren't the only anti-DoS/anti-abuse mechanisms used on the web today, but this is the world we live in.

While I usually use scroogle or ixquick, on occasion I do a Google query. Sometimes it works; frequently it is blocked. When they give me a captcha, I've learned to just give up right then (or maybe try with a new exit node). I have never had a successful result with a Google captcha... it just keeps giving me new ones. So while your explanation for blocking makes sense, it doesn't explain why they don't fix their captcha. (Maybe it's tied to cookies, but I'm not going to allow Google cookies for that one instance only to disable them again.)

I realize there is nothing anybody on this list can do (unless a Google employee subscribes to the list). I'm just venting...

Cheers, Jim
Re: Google and Tor.
Thus spake Matthew (pump...@cotse.net):
> On numerous occasions when using Google with Tor (yes, I know there are other options like Scroogle) it claims I might be sending automated queries and gives me a CAPTCHA. Sometimes this allows me to search; other times I am caught in a loop and am constantly sent back to the CAPTCHA screen.

This has been a known problem with Google for ages. There are numerous ways we could improve this situation without requiring blanket exemptions for Tor exits (such as client-side puzzles, or more intelligent rate limiting algorithms that are more tolerant of our typically cookieless but legitimate users coming in large masses from the same IP).

Unfortunately, the DoS team at Google is unwilling to work with us to find alternate ways of limiting these captchas at the moment. Tor has many friends inside Google, but sadly the DoS team is independent enough from the rest of Google that, regardless of Google's opinion of Tor or censorship circumvention, the DoS team is unwilling to devote any development resources to improving this problem, and they have declined even meeting with us directly :(

Astute students of human nature will note that this is the result you expect when you place a small group of people in a position of unassailable control of a resource "for security reasons"...

Our current solution is to automatically redirect Google captcha requests to alternate search engines such as ixquick, scroogle, yahoo, or bing. This feature was introduced in Torbutton 1.2.5 and uses ixquick by default. However, Google's recent switch to using encrypted.google.com for SSL search caused our captcha detection code to break in Torbutton. So if you are using encrypted search and/or HTTPS Everywhere, your captchas will no longer be seamlessly redirected. This should be fixed in Torbutton 1.2.6.

--
Mike Perry
Mad Computer Scientist
fscked.org evil labs
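The "more intelligent rate limiting" suggested above could be as simple as a per-IP token bucket with a larger burst allowance for known shared addresses. The sketch below is purely illustrative, not anything Google or Tor actually implements; every name and parameter here is made up for the example.

```python
import time

class TokenBucket:
    """Allows bursts up to `capacity` requests, refilling at `rate` per second.
    Hypothetical parameters for illustration only."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # ip -> TokenBucket

def check(ip, shared=False):
    # A known shared address (e.g. a Tor exit or large NAT) gets a bigger
    # burst allowance instead of an immediate captcha wall.
    if ip not in buckets:
        buckets[ip] = TokenBucket(rate=1, capacity=100 if shared else 10)
    return buckets[ip].allow()
```

The point of the sketch is the asymmetry: high-user-count IPs get a proportionally larger allowance, so legitimate cookieless users behind one address are not all punished for sharing it.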
Re: Google and Tor.
On 8/25/2010 8:52 PM, Mike Perry wrote:
> Thus spake Matthew (pump...@cotse.net):
>> On numerous occasions when using Google with Tor (yes, I know there are other options like Scroogle) it claims I might be sending automated queries and gives me a CAPTCHA. Sometimes this allows me to search; other times I am caught in a loop and am constantly sent back to the CAPTCHA screen.
> This has been a known problem with Google for ages. (snip)

Really? I've never had this problem until recently. For about 2 years now every Google CAPTCHA I've run into has been uneventful and let me through after the first try; only in the past month or so have I been getting caught in the CAPTCHA loop.

~Justin Aplin
Re: Google and Tor.
Thus spake Aplin, Justin M (jmap...@ufl.edu):
> On 8/25/2010 8:52 PM, Mike Perry wrote:
>> This has been a known problem with Google for ages. (snip)
> Really? I've never had this problem until recently. For about 2 years now every Google CAPTCHA I've run into has been uneventful and let me through after the first try; only in the past month or so have I been getting caught in the CAPTCHA loop.

Various horrible behaviors have come and gone with this captcha system over the past 3 years or so. Sometimes you just get a 403 with no captcha, sometimes you have to solve a captcha, sometimes 2 captchas, sometimes infinite captchas, and sometimes it forgets your query and you have to start the whole process over again from a Google landing page. My point is that the whole system is problematic on a number of levels.

I also personally believe that there are better ways of rate limiting and screening queries from high-user-count IPs that do not involve cookies or captchas. I also question Google's threat model on this feature. Sure, they want to stop people from programmatically re-selling Google results without an API key in general, but there is A) no way people will be reselling Tor-level-latency results, B) no way they can really expect determined competitors not to do competitive analysis of results using private IP ranges large enough to avoid DoS detection, and C) no way that the total computational cost of the queries coming from Tor can justify denying so many users easy access to their site.

This is why I'd love a chance to meet with the DoS team to discuss some of these points. However, I get the strong impression it is a very secretive group that is especially wary of discussing its methods, reasoning, or analysis with anyone else, and is generally given a blank check to enact policy without proper in-depth cost/benefit analysis because its actions are "for security".

--
Mike Perry
Mad Computer Scientist
fscked.org evil labs
Re: Google and Tor.
On Wed, 25 Aug 2010 20:04:01 -0700, Mike Perry mikepe...@fscked.org wrote:
> I also question Google's threat model on this feature. Sure, they want to stop people from programmatically re-selling Google results without an API key in general, but there is A) no way people will be reselling Tor-level-latency results, B) no way they can really expect determined competitors not to do competitive analysis of results using private IP ranges large enough to avoid DoS detection, and C) no way that the total computational cost of the queries coming from Tor can justify denying so many users easy access to their site.

If Tor exit nodes were allowed to bypass Google's CAPTCHA, someone could put up a low-bandwidth Tor exit node and then send their own automated queries directly to Google from their Tor exit's IP.

Robert Ransom