Re: [tor-dev] Statistics on fraction of connections used uni-/bidirectionally

2013-12-22 Thread Karsten Loesing
On 12/21/13 6:41 PM, Rob Jansen wrote:
 I think your newest graph (the one with the three median+range plots on the 
 same graph) is the best, and would be happy if we switched to that one.

Great!  Glad you like the new graph.  I just deployed it:

https://metrics.torproject.org/performance.html#connbidirect

Thanks again for your feedback!

All the best,
Karsten

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Review of Proposal 215: Let the minimum consensus method change with time (was: Tor proposal status (December 2013))

2013-12-22 Thread Karsten Loesing
On 12/17/13 10:31 PM, Nick Mathewson wrote:
 215  Let the minimum consensus method change with time
 
  This proposal describes how we can raise the minimum
  allowable consensus method that all authorities must
  support, since the ancient consensus method 1 would not
  actually be viable to keep the Tor network running.  We
  should do this; see ticket #10163. (11/2013)

Hi Nick,

I'm probably missing something important here, but I don't know what.

Right now, if a directory authority learns from the votes that more than
2/3 of authorities support a consensus method higher than it can support
itself, it falls back to consensus method 1.  That authority then
produces a consensus that won't have enough signatures for any client to
use it, so it's useless.

The proposal suggests that this authority produces a consensus using a
higher method than 1, but still lower than what the other authorities
are going to produce.  But this consensus will still not contain enough
signatures.
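
To make sure I'm reading the two behaviors correctly, here's a minimal
sketch (the vote structure and function names below are made up for
illustration, not taken from the tor source):

def negotiated_method(votes, quorum=2.0 / 3.0):
    # Highest consensus method supported by more than 2/3 of the voters.
    methods = set()
    for vote in votes:
        methods.update(vote["consensus_methods"])
    supported = [m for m in sorted(methods)
                 if sum(m in v["consensus_methods"] for v in votes)
                 > quorum * len(votes)]
    return max(supported) if supported else 1

def method_to_use(my_methods, votes, proposal_215=False):
    chosen = negotiated_method(votes)
    if chosen in my_methods:
        return chosen
    # We cannot produce the consensus the majority is going to sign.
    if proposal_215:
        return max(my_methods)  # proposed: highest method we do support
    return 1                    # current behavior: fall back to method 1

Either way the authority ends up producing a consensus that lacks enough
signatures for clients to use.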

What's the point?

The last paragraph in the proposal makes most sense to me:

 We might want to have the behavior when we see that everybody else
 will be using a method we don't support be "Don't make a consensus
 at all."  That's harder to program, though.

Can you say why this solution is harder to program?  It seems like the
cleaner design.

But even if it's too difficult to program (or would likely add new
bugs), why not keep the fall-back-to-method-1 workaround?  Does it cause
any harm?

There are probably edge cases I didn't consider.  I wonder which ones
those are.

All the best,
Karsten

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] bridgedb automation

2013-12-22 Thread isis agora lovecruft
Matthew Finkel transcribed 8.6K bytes:
 On Thu, Dec 19, 2013 at 02:52:03AM +0100, Nicolas Vigier wrote:
  On Tue, 17 Dec 2013, isis agora lovecruft wrote:
  
   Nicolas Vigier transcribed 1.4K bytes:
 
 Hi  Nicolas!
 
 Thanks again for following up on this!
 
   
    Just in case you haven't seen it, Lunar made a wiki page which has
    quite a bit of info on it, and I filled in some more on BridgeDB. [0]
  
  Yes, Lunar showed me this page, and we used it when he gave me a summary
  of what each of the projects do.
  
   
    aabgsn maintained BridgeDB for a year or so, but no longer works on it
    (though they are more than welcome to do so, if they wish to). sysrqb
    has been helping me maintain BridgeDB quite a bit (feel free to CC them
    on BridgeDB topics).
   
I am currently looking at the status and list of things to be done
regarding automation for the Tor Project. I have been looking at bridgedb:
https://people.torproject.org/~boklm/automation/tor-automation-review.html#_bridgedb
   
    From that page:

Continuous Build
BridgeDB is not currently built and tested by Jenkins.

However, Isis Lovecruft has a personal development fork on github that is
built and tested by travis-ci.org:
https://travis-ci.org/isislovecruft/bridgedb/

Packaging
BridgeDB does not have packages. It is currently deployed using a
Python virtualenv.
   
   
    To my knowledge, BridgeDB is not currently deployed in a virtualenv
    (sysrqb was the last to redeploy it). I recently refactored the main
    loop and scripts so that it *can* run in a virtualenv, and it *should*
    be run in one, because:

      1. We won't need to nag weasel/Sebastian to update/install BridgeDB
         dependencies.
      2. Dependencies will not be installed via sudo.
 
 This sounds advantageous. It's currently running with unmodified PATH,
 PYTHONPATH, etc. environment variables, using the existing scripts to
 install and run it. It doesn't install under /usr, so a normal user can
 install it.
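
As an aside, one thing the virtualenv setup makes easy is failing fast
when the environment lacks the pinned dependencies, rather than nagging
anyone to install them. A rough sketch (the requirement strings are
illustrative, not BridgeDB's actual pins):

import sys
import pkg_resources

# Illustrative pins only; BridgeDB's real requirements live in its repo.
REQUIREMENTS = ["Twisted>=13.1.0", "pyOpenSSL>=0.13"]

def check_environment(requirements=REQUIREMENTS):
    # Return a list of (requirement, error) pairs for anything unsatisfied.
    problems = []
    for req in requirements:
        try:
            pkg_resources.require(req)
        except (pkg_resources.DistributionNotFound,
                pkg_resources.VersionConflict) as error:
            problems.append((req, error))
    return problems

if __name__ == "__main__":
    problems = check_environment()
    for req, error in problems:
        sys.stderr.write("unsatisfied requirement %s: %s\n" % (req, error))
    sys.exit(1 if problems else 0)

Running something like that inside the virtualenv before starting the
daemon would catch a half-installed environment early.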
 
  
  I'm not familiar yet with the process to maintain *.tpo services, and
  what part is done by the sysadmin team versus what part can be done by
  the maintainer of a service, like installing dependencies or other
  operations that require root access. Do you (or someone else reading
  this) have more details about this?
  
 
 The general rule I've determined is that if packages are being installed
 or upgraded then it's a sysadmin task, and similarly if something is owned
 by root. Otherwise the group is responsible for it. I'm probably missing
 something important, though.
 
   
    I've been considering creating packages for BridgeDB on PyPI.

    Pros:
      * Even if we manually download the bundle, verify the hash, and then
        install it, this seems potentially easier and less error-prone than
        checking out a git tag, verifying it, and then building.
      * Packaging it now reserves the 'bridgedb' Python namespace for our
        use.

    Cons:
      * I don't want to make people think that this thing is a polished
        distribution system for people who wish to run their own
        BridgeAuths.
 
 1) I don't think we really need to worry about this :)
 2) Please don't deploy this yourself. But, if you do, deploy
    carefully: this project is under heavy development.
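
For reference, the PyPI packaging itself would be small. A rough sketch of
a setup.py (every name, version, and dependency pin below is a placeholder,
not BridgeDB's actual metadata):

# setup.py -- illustrative only; all values are placeholders.
from setuptools import setup, find_packages

setup(
    name="bridgedb",
    version="0.0.1",
    description="Backend systems for distributing Tor bridges",
    author="The Tor Project",
    url="https://gitweb.torproject.org/bridgedb.git",
    packages=find_packages(exclude=["*.test", "*.test.*"]),
    install_requires=["Twisted>=13.1.0", "pyOpenSSL>=0.13"],  # placeholders
)

A release would then be a source tarball from python setup.py sdist, which
can be GPG-signed and verified before installation, much like a signed git
tag.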
 
   
   If proper packaging is helpful for Jenkins, however, I can easily do so.
  
  An idea could be to have a Debian package for bridgedb, and make Jenkins
  update the packages in a repository automatically when there are new
  commits.
  
 
 For our purposes I think Debian packages are a bit overkill, but I have
 nothing against creating them if it will make testing and deployment
 easier.
 

I am moderately to strongly against using Debian's packaging system for Python
things, because it is perpetually outdated, and because the Python Software
Foundation completely disregards the standard packaging practice of
backporting patches. When something breaks in Python, they fix it in an
upcoming release. If you complain that something is broken which is fixed in a
newer release than the Python version you're using, the Python devs will tell
you to upgrade.

Debian sid's version is currently 2.7.5-5, which is already outdated: 2.7.6
was released two weeks ago. Wheezy is even worse; it's nearly two years
outdated. Briefly skimming the Python release changelog [0], I can point at
roughly 30 bugs which BridgeDB will likely hit if we use the wheezy version
(which we are using). Several of those bugs are due to ancient, deprecated
OpenSSL API features; others are rather severe SSL bugs, one of which was a
recent CVE (CVE-2013-4238). [1]

[0]: http://hg.python.org/cpython/raw-file/99d03261c1ba/Misc/NEWS
[1]: https://security-tracker.debian.org/tracker/CVE-2013-4238

What is more important, and what I would *really* prefer not to fight, is the
inevitable slew of horrific glitches and hiccups which will occur from using

Re: [tor-dev] [Question to sysadmins and HS operators:] How should Hidden Services scale?

2013-12-22 Thread George Kadianakis
Also forwarding George's message. The original thread used a wrong address
for tor-dev, so their messages were not posted to tor-dev...

George Kargiotakis said:
 On Fri, 20 Dec 2013 11:58:27 -0500
 and...@torproject.org wrote:

  On Fri, Dec 20, 2013 at 03:08:01AM -0800, desnac...@riseup.net wrote
  1.7K bytes in 0 lines about:
  : For this reason we started wondering whether DNS-round-robin-like
  : scalability is actually worth such trouble. AFAIK most big websites
  : use DNS round-robin, but is it necessary? What about application-layer
  : solutions like HAProxy? Do application-layer load balancing solutions
  : exist for other (stateful) protocols (IRC, XMPP, etc.)?
 
  In my experience in running large websites and services, we didn't use
  DNS round-robin. If large sites do it themselves, versus outsourcing
  it to a content delivery network, they look into anycast, geoip-based
  proxy servers, or load balancing proxy servers (3DNS/BigIP,
  NetScaler, etc.). DNS round-robin is for smaller websites which want to
  simply spread the load across redundant servers--this is what tor
  does now.
 
  If scaling hidden services is going to be a large challenge and
  consume a lot of time, it sounds like making HS work more reliably
  and with stronger crypto is a better return on effort. The simple
  answer for scaling has been to copy around the private/public keys
  and host the same HS descriptors on multiple machines. I'm not sure
  we have seen a popular enough hidden service to warrant the need for
  massive scaling now.
 
  Maybe changing HAProxy to support .onion links is a fine option too.
 

 Hello all,

  For a while we've been told that hidden services don't scale and
  there is a max number of clients that a hidden service can handle,
  so we decided to also consider hidden service scalability as part of
  the upcoming redesign. Unfortunately, we are not experienced in
  maintaining busy hidden services, so we need some help here.

 To solve a problem you need to strictly define it first. Where exactly
 is the bottleneck here? I've never run a .onion that couldn't scale
 because of many clients visiting, so I don't have first-hand
 experience with such issues. If it's because it's slow to open many
 connections to hidden services, then imho simply adding a .onion-aware
 HAProxy/varnish won't solve these problems in the long run. There will
 come a time when one HAProxy/varnish won't be enough, and it will always
 be a SPOF.

 Most big websites do geoip (to distribute the load between DCs in
 different regions), then they do something like HAProxy/LVS to
 spread the load across multiple workers in the same DC, and of course
 they put static files on CDNs.

 Each of the above serves quite a different purpose: geoip reduces
 latency, LVS/HAProxy provides load-balancing and graceful fail-over,
 and CDNs do both at the same time but for different types of
 requests.

 Since geoip does not make sense in the Tor world, maybe making
 multiple hosts advertise the same .onion address at the same time in
 the database would make some sense. If that were possible, people could
 also implement .onion CDN services. I'm not so sure what can be done for
 an LVS-like setup in the Tor world, though.
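
To make the application-layer option concrete, here is a toy sketch of a
round-robin TCP forwarder that a hidden service's HiddenServicePort could
point at (the addresses are made up; this is nowhere near HAProxy/varnish,
and it is still a SPOF):

# Toy sketch of an application-layer round-robin forwarder; the addresses
# below are made up, and this is illustrative only, not production code.
import itertools
import socket
import threading

FRONTEND = ("127.0.0.1", 8080)   # e.g. HiddenServicePort 80 127.0.0.1:8080
BACKENDS = [("127.0.0.1", 8081), ("127.0.0.1", 8082)]
backend_cycle = itertools.cycle(BACKENDS)

def pipe(src, dst):
    # Copy bytes from src to dst until either side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except socket.error:
        pass
    finally:
        dst.close()

def spawn(target, *args):
    thread = threading.Thread(target=target, args=args)
    thread.daemon = True
    thread.start()

def handle(client):
    # Hand each new client connection to the next backend worker in turn.
    backend = socket.create_connection(next(backend_cycle))
    spawn(pipe, client, backend)
    spawn(pipe, backend, client)

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(FRONTEND)
listener.listen(16)
while True:
    connection, _ = listener.accept()
    handle(connection)

Each new client connection is handed to the next local worker, which
spreads CPU load across backends but does not remove the single point of
failure.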

 I hope this helps a tiny bit.

 Regards,
 --
 George Kargiotakis
 https://void.gr
 GPG KeyID: 0x897C03177011E02C


___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] OnionMail 1.0.48B is Out.

2013-12-22 Thread Liste
OnionMail 1.0.48B is now ready.

http://onionmail.onfo

OnionMail can now use GPG.

(We are upgrading some OnionMail Servers).

The source code is on github:
https://github.com/onionmail/onionmail

If you don't want to connect only via Tor, you can use a hybrid
solution: the NTU LocalProxy

https://github.com/onionmail/ntu

It can hide more hidden services.
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev