Re: [tor-dev] Set up Tor private network

2016-02-25 Thread Tim Wilson-Brown - teor

> On 26 Feb 2016, at 06:25, Tom Ritter  wrote:
> 
> On 25 February 2016 at 21:00, SMTP Test  wrote:
>> Hi all,
>> 
>> I am trying to set up a private Tor network. I found two tutorials online
>> (http://liufengyun.chaos-lab.com/prog/2015/01/09/private-tor-network.html
>> and https://ritter.vg/blog-run_your_own_tor_network.html), but it seems that
>> both are outdated. Could anyone please give me a tutorial or some hints
>> on building a private Tor network?
> 
> Can you explain what you ran into that was outdated or wasn't working?
> While time marches on and tor is not quite the same as when I wrote
> that - I'm not sure what would have completely broken since then…

Another option is to use chutney to autoconfigure a test tor network on your 
local machine.
But it can be hard to use and hard to work out what's broken if it doesn't work.
https://gitweb.torproject.org/chutney.git/tree/README 
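If you want to try it, the basic workflow from a chutney checkout is roughly
(see the README for details; you'll need tor and tor-gencert available, e.g.
in your $PATH):

  ./chutney configure networks/basic
  ./chutney start networks/basic
  ./chutney status networks/basic
  ./chutney stop networks/basic

networks/basic is a small test network with authorities, relays, and a client;
there are other example networks under networks/ as well.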

>> Another question is: what is the minimum
>> number of required directory authorities for a private Tor network? I am
>> wondering if one directory authority is enough.
> 
> I never tested with 1. I know 3 works.

1 works fine. But there's no redundancy if it stops working.
(Even numbers are avoided because they run the risk of consensus ties: half 
vote one way, half vote another, and there is no majority consensus about 
certain information, or the entire network state.)
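(To make the tie risk concrete, here's a toy calculation, assuming, roughly as
in the real network, that a consensus needs the agreement of more than half of
the authorities:

  # Toy calculation, not tor code: smallest possible majority for n
  # authorities, and whether a clean even split (tie) is possible.
  for n in range(1, 7):
      majority = n // 2 + 1
      print(n, majority, "tie possible" if n % 2 == 0 else "no tie")

So with 4 authorities a 2-2 split leaves you without a majority, while with 3
or 5 a tie on a yes/no question is impossible as long as all of them vote.)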

Tim

Tim Wilson-Brown (teor)

teor2345 at gmail dot com
PGP 968F094B

teor at blah dot im
OTR CAD08081 9755866D 89E2A06F E3558B7F B5A9D14F



___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Set up Tor private network

2016-02-25 Thread Tom Ritter
On 25 February 2016 at 21:00, SMTP Test  wrote:
> Hi all,
>
> I am trying to set up a private Tor network. I found two tutorials online
> (http://liufengyun.chaos-lab.com/prog/2015/01/09/private-tor-network.html
> and https://ritter.vg/blog-run_your_own_tor_network.html), but it seems that
> both are outdated. Could anyone please give me a tutorial or some hints
> on building a private Tor network?

Can you explain what you ran into that was outdated or wasn't working?
While time marches on and tor is not quite the same as when I wrote
that - I'm not sure what would have completely broken since then...

> Another question is: what is the minimum
> number of required directory authorities for a private Tor network? I am
> wondering if one directory authority is enough.

I never tested with 1. I know 3 works.

-tom


[tor-dev] Set up Tor private network

2016-02-25 Thread SMTP Test
Hi all,

I am trying to set up a private Tor network. I found two tutorials online
(http://liufengyun.chaos-lab.com/prog/2015/01/09/private-tor-network.html
and https://ritter.vg/blog-run_your_own_tor_network.html), but it seems that
both are outdated. Could anyone please give me a tutorial or some hints on
building a private Tor network? Another question: what is the minimum number
of directory authorities required for a private Tor network? I am wondering
if one directory authority is enough.

Thanks


Re: [tor-dev] stopping the censoring of tor users.

2016-02-25 Thread blacklight .
About the issue of exit nodes needing to know which bridge to connect to:
could we not make a system similar to hidden services, so that the nodes can
connect to them without knowing the actual IP address? If we could design an
automatic system in which flash proxies could be configured like that, then I
think it might work. What are your thoughts?
On 25 Feb 2016 22:37, "Thom Wiggers" wrote:

> You may be interested in the following from the FAQ:
>
> https://www.torproject.org/docs/faq.html.en#HideExits
>
> You should hide the list of Tor relays, so people can't block the exits.
>
> There are a few reasons we don't:
>
> a) We can't help but make the information available, since Tor clients
> need to use it to pick their paths. So if the "blockers" want it, they can
> get it anyway. Further, even if we didn't tell clients about the list of
> relays directly, somebody could still make a lot of connections through Tor
> to a test site and build a list of the addresses they see.
> b) If people want to block us, we believe that they should be allowed to
> do so. Obviously, we would prefer for everybody to allow Tor users to
> connect to them, but people have the right to decide who their services
> should allow connections from, and if they want to block anonymous users,
> they can.
> c) Being blockable also has tactical advantages: it may be a persuasive
> response to website maintainers who feel threatened by Tor. Giving them the
> option may inspire them to stop and think about whether they really want to
> eliminate private access to their system, and if not, what other options
> they might have. The time they might otherwise have spent blocking Tor,
> they may instead spend rethinking their overall approach to privacy and
> anonymity.
>
> On 25/02/16 20:04, blacklight . wrote:
>
> Hello there! I don't know if this mailing list works, but I thought I'd
> give it a try.
>
> I was recently reading an article
> (http://www.pcworld.com/article/3037180/security/tor-users-increasingly-treated-like-second-class-web-citizens.html)
> about Tor users getting blocked from accessing a lot of websites. After
> giving this some thought, I think I came up with a possible solution to the
> problem: there is a thing called bridges, which are used to access the Tor
> network without your ISP knowing that you use Tor. If you can use those
> proxies to enter the network, it might also be possible to exit the network
> with them. But then we face a second challenge: the exit nodes have to be
> configured to relay traffic to such a bridge, so the exit node operators
> also need to know the IP of the bridge. While this doesn't seem difficult to
> do, it can become difficult: if the bridges are published on a public list
> (like normal bridges are), then the blocking sites in question will be able
> to block those addresses too. A possible solution could be found in
> something called flash proxies. Flash proxies are bridges with a really
> short life span; they are created and destroyed fairly swiftly, and when
> this is done at a rapid pace they become really hard to block because the IP
> changes all the time. So if the exit nodes could be configured to make use
> of such flash proxies, the problem could be solved. I must admit that I am
> not an expert on this or anything, and it needs a lot more thought, but it
> could work. So I was wondering if there are any experts who could help me
> think through this subject and maybe confirm whether this idea could work.
>
>
> greetings, blacklight
>
>


Re: [tor-dev] stopping the censoring of tor users.

2016-02-25 Thread Thom Wiggers
You may be interested in the following from the FAQ:

https://www.torproject.org/docs/faq.html.en#HideExits

You should hide the list of Tor relays, so people can't block the exits.

There are a few reasons we don't:

a) We can't help but make the information available, since Tor clients
need to use it to pick their paths. So if the "blockers" want it, they
can get it anyway. Further, even if we didn't tell clients about the
list of relays directly, somebody could still make a lot of connections
through Tor to a test site and build a list of the addresses they see.
b) If people want to block us, we believe that they should be allowed to
do so. Obviously, we would prefer for everybody to allow Tor users to
connect to them, but people have the right to decide who their services
should allow connections from, and if they want to block anonymous
users, they can.
c) Being blockable also has tactical advantages: it may be a persuasive
response to website maintainers who feel threatened by Tor. Giving them
the option may inspire them to stop and think about whether they really
want to eliminate private access to their system, and if not, what other
options they might have. The time they might otherwise have spent
blocking Tor, they may instead spend rethinking their overall approach
to privacy and anonymity.

On 25/02/16 20:04, blacklight . wrote:
> Hello there! I don't know if this mailing list works, but I thought I'd
> give it a try.
>
> I was recently reading an article
> (http://www.pcworld.com/article/3037180/security/tor-users-increasingly-treated-like-second-class-web-citizens.html)
> about Tor users getting blocked from accessing a lot of websites. After
> giving this some thought, I think I came up with a possible solution to the
> problem: there is a thing called bridges, which are used to access the Tor
> network without your ISP knowing that you use Tor. If you can use those
> proxies to enter the network, it might also be possible to exit the network
> with them. But then we face a second challenge: the exit nodes have to be
> configured to relay traffic to such a bridge, so the exit node operators
> also need to know the IP of the bridge. While this doesn't seem difficult to
> do, it can become difficult: if the bridges are published on a public list
> (like normal bridges are), then the blocking sites in question will be able
> to block those addresses too. A possible solution could be found in
> something called flash proxies. Flash proxies are bridges with a really
> short life span; they are created and destroyed fairly swiftly, and when
> this is done at a rapid pace they become really hard to block because the IP
> changes all the time. So if the exit nodes could be configured to make use
> of such flash proxies, the problem could be solved. I must admit that I am
> not an expert on this or anything, and it needs a lot more thought, but it
> could work. So I was wondering if there are any experts who could help me
> think through this subject and maybe confirm whether this idea could work.
>
>
> greetings, blacklight
>
>



[tor-dev] Fwd: Re: Support for mix integration research

2016-02-25 Thread Sebastian G.
I'm forwarding this to the list because it might contain relevant
information, mostly within George's reply.

My initial reason for not sending my reply to the list was to avoid bothering
too many people with seemingly irrelevant questions on my part.

I agree with the reply given in the forwarded message. What 'it' does to the
network has to be considered.

Best Regards,
Sebastian G. (bastik)


-- Forwarded Message --
Subject: Re: [tor-dev] Support for mix integration research
Date: Thu, 25 Feb 2016 19:58:56 +0100
From: George Kadianakis 
To: Sebastian G.  

"Sebastian G. "  writes:

> 25.02.2016, 13:08 George Kadianakis:
>> b) Does this increase the hard disk or memory requirements of people running
>>    Tor relays? That is, in high-latency mode, will Tor relays need to store
>>    clients' traffic for longer? How does that impact the memory and hard disk
>>    requirements of people running Tor relays?
>
> Hello,
>
> This has been intentionally sent off-list.
>
> I just don't understand the question. How would you delay traffic
> without storing it for longer? Whether in memory or on disk, the data
> being delayed have to be stored somewhere, as far as I understand.
>
> Could one delay traffic on the wire? E.g. like a truck driving on a road
> between two storage facilities, where you couldn't say the cargo is
> already stored in either facility.
>
> I fail to see how it would be possible to have a system that delays
> something without holding back whatever is delayed.
>

I agree.

I'm wondering how much this increases the memory and hard disk requirements of
running a relay. How much would the current relays contribute to such a
high-latency system? How much memory / CPU would be required to contribute
meaningfully to it? And what about DDoS opportunities?

All these things will need to be thought through so that we don't overwhelm the
current Tor network by introducing this high-latency mode. Because if that's the
case, we should probably have a separate network for the high-latency mode.
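To make the memory question a bit more concrete, here's a back-of-the-envelope
sketch (the rate and delay are made-up numbers, purely for illustration):

  # Holding delayed traffic means keeping roughly
  # (rate of delayed traffic) * (average added delay) around per relay.
  def steady_state_buffer(delayed_rate_bytes_per_s, mean_delay_s):
      return delayed_rate_bytes_per_s * mean_delay_s

  # e.g. a relay forwarding 10 MiB/s of high-latency traffic with a mean
  # added delay of 30 seconds holds about 300 MiB of it at any given time.
  print(steady_state_buffer(10 * 2**20, 30) / 2**20, "MiB")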

It's a pity you replied off-list; this conversation is of sufficient interest.

Cheers.

> If you currently take in x and give out y at the same rate, whatever is in
> transit has to be buffered. If the rate of y now decreases while the rate of
> x remains the same, more data needs to be buffered. Unless you reduce the
> rate of x, more storage is required.
>
> Sorry, I just don't get it. Please show me where my error is. Can be
> short. :)
>





[tor-dev] stopping the censoring of tor users.

2016-02-25 Thread blacklight .
Hello there! I don't know if this mailing list works, but I thought I'd give
it a try.

I was recently reading an article
(http://www.pcworld.com/article/3037180/security/tor-users-increasingly-treated-like-second-class-web-citizens.html)
about Tor users getting blocked from accessing a lot of websites. After giving
this some thought, I think I came up with a possible solution to the problem:
there is a thing called bridges, which are used to access the Tor network
without your ISP knowing that you use Tor. If you can use those proxies to
enter the network, it might also be possible to exit the network with them.
But then we face a second challenge: the exit nodes have to be configured to
relay traffic to such a bridge, so the exit node operators also need to know
the IP of the bridge. While this doesn't seem difficult to do, it can become
difficult: if the bridges are published on a public list (like normal bridges
are), then the blocking sites in question will be able to block those
addresses too. A possible solution could be found in something called flash
proxies. Flash proxies are bridges with a really short life span; they are
created and destroyed fairly swiftly, and when this is done at a rapid pace
they become really hard to block because the IP changes all the time. So if
the exit nodes could be configured to make use of such flash proxies, the
problem could be solved. I must admit that I am not an expert on this or
anything, and it needs a lot more thought, but it could work. So I was
wondering if there are any experts who could help me think through this
subject and maybe confirm whether this idea could work.


greetings, blacklight


Re: [tor-dev] Tor Consensus Transparency, take two

2016-02-25 Thread Nick Mathewson
On Wed, Feb 24, 2016 at 4:54 PM, Linus Nordberg  wrote:
> Hi,
>
Hi, Linus! This is now proposal 267.


Re: [tor-dev] Support for mix integration research

2016-02-25 Thread George Kadianakis
Katharina Kohls  writes:

> Hi everyone,
>
> we are a team of four PhD students in the field of IT security, working at
> the Ruhr-University Bochum at the chair for systems security and in the
> information security group.
>
> We are currently working on a research project whose goal is to strengthen
> Tor against timing attacks by integrating mixes into Tor nodes. The general
> idea is to differentiate between high-latency and low-latency traffic in the
> network and to apply additional delays to the former type of packets. This
> should decrease the success of traffic analysis attacks without restricting
> Tor's low-latency assurance.
>
> We plan to integrate the mix into Tor version 0.2.5.10 and analyze its
> performance with the Shadow simulator.
>
> As there are a lot of details to consider, both regarding the technical
> aspects of the integration and the practical assumptions, e.g. "how do we
> get DiffServ-like nodes?", we would be pleased to receive some feedback on
> the idea and support for the implementation of the mix. Further details on
> the mix will of course be provided if needed!
>

Hello there,

I'm also very interested in this latency-mixing research problem! We are
currently trying to address these timing attacks by adding padding to our
links. However, defending against these attacks with padding is not easy, and
it also puts additional load on the network.

FWIW, there has been some previous work on this topic, but nothing that can
currently be used in a practical setting. For example, see the paper
"Blending different latency traffic with alpha-mixing" if you haven't read it
already. The biggest challenge with that paper is switching its message-based
approach to a stream-based one (as Tor is).

Other potential papers are "Stop-and-Go-MIX" by Kesdogan et al. and
"Garbled Routing (GR): A generic framework towards unification of
anonymous communication systems" by Madani et al. But I haven't looked
into them at all...

Here are some basic system-agnostic questions about the project:

a) What are the benefits from mixing low-latency Tor traffic with high-latency
   traffic? Is it to leverage the already existing network so that we don't need
   to bootstrap a second trusted anonymizing network?

   On this note, do we actually get any additional security properties from
   mixing low-latency traffic with high-latency traffic? That is, does the
   low-latency traffic provide cover for the high-latency traffic? What about
   the opposite?

   And for what kind of adversaries do these security properties apply? Will
   this actually confuse adversaries who run Tor relays themselves? What about
   network adversaries that monitor the network of Tor relays?

b) Does this increase the hard disk or memory requirements of people running Tor
   relays? That is, in high-latency mode, will Tor relays need to store clients'
   traffic for longer? How does that impact the memory and hard disk
   requirements of people running Tor relays?

c) Does this impact the legal liability of people running Tor relays?

Looking forward to learning more about your project :)

Cheers!


Re: [tor-dev] Next version of the algorithm

2016-02-25 Thread George Kadianakis
Reinaldo de Souza Jr  writes:

> On 2/16/16 12:20, George Kadianakis wrote:
>> The very latest prop259 basically forgets the unreachable guard status as
>> soon as the algorithm terminates. I wonder if we actually want this.
>> Hopefully guardsim has a simulation scenario that will illustrate whether
>> that's a good idea or not.
>
> We are assuming the unreachable state is persisted (by setting
> unreachable_since in the guard) in the current simulation. We are also
> ignoring entries known to be unreachable when choosing guards from
> either USED_GUARDS or REMAINING_UTOPIC_GUARDS to be our PRIMARY_GUARDS.
>

Thanks for the work on the proposal! I haven't had time to read the new version
yet apart from the stuff you point out in your email.

Making the unreachable state persistent seems like a reasonable idea.

However, I'm not so sure about rejecting previously unreachable guards from
being primary guards. The concept of primary guards is to _always_ try to use
the top N guards in your guardlist (as long as they are in the latest
consensus).  So the list of primary guards should stay the same as long as none
of those nodes disappear from the consensus.

Consider the following case: the network was offline for a few seconds and we
ended up marking our top 8 guards as unreachable. Then the network comes back
up, and we want to build a new circuit: when it's time to choose the set of
primary guards, we end up choosing the 9th, 10th and 11th nodes instead of the
top 3 guards, even though the top 3 guards are actually online. Ideally Tor
would always connect to the top 3 guards, and this is not the case here.
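To illustrate, here is a toy sketch of that scenario (not tor or guardsim code,
just the selection logic as I understand it):

  # Guards in preference order; g1 is our top guard.
  USED_GUARDS = ["g%d" % i for i in range(1, 12)]
  unreachable = set(USED_GUARDS[:8])   # marked during the brief outage

  def pick_primary(used_guards, unreachable, n=3):
      # Skip guards currently marked unreachable and take the first n.
      return [g for g in used_guards if g not in unreachable][:n]

  print(pick_primary(USED_GUARDS, unreachable))
  # -> ['g9', 'g10', 'g11'], even though g1..g3 are back online by now.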

Maybe the above issue is not possible with the new prop259 algorithm? If that's
the case, why?

I'm also curious to see how your simulation reacts to this new change.

>> Maybe to reduce this exposure, we should try to go back to
>> STATE_PRIMARY_GUARDS in those cases? Tor does a similar trick right now
>> which has been very helpful:
>>
>> https://gitweb.torproject.org/tor.git/tree/src/or/entrynodes.c?id=tor-0.2.7.6#n803
>>
>> Maybe an equivalent heuristic would be that if we are in STATE_RETRY_ONLY
>> and we manage to connect to a non-primary guard, we hang up the connection,
>> and go back into STATE_PRIMARY_GUARDS.
>
> Having this in mind, and revisiting
> entry_guard_register_connect_status(), it seems the
> appropriate approach would be marking all USED_GUARDS for retry,
> discarding the current circuit and trying to build a new circuit
> (restart the whole algorithm) rather than marking the PRIMARY_GUARDS and
> going back to STATE_PRIMARY_GUARDS.
>
> This is because by marking all the used guards for retry we can expect
> the next circuit to use the first available USED_GUARD as an entry guard.
>
> In addition to that, my understanding of this behavior in the current
> guard selection algorithm is:
>
> When you succeed in building a circuit with a new guard, you mark for retry
> all previously used guards (the global entry_guards var in tor) which are:
>
> - listed in the latest consensus
> - not bad_since some time
> - unreachable_since some time and is "time to retry"*
> - (and it does not have other restrictions which are not important for
> our simulation, like being a bridge when required or fast/stable)
>
> If you find any retryable guard matching the conditions above, you discard
> the current circuit and try to build a new one:
>
>   - best case: you have a new circuit with one of the previously
> unreachable used guards.
>   - worst case: all previously unreachable used guards marked for retry
> fail again, and you have a new circuit with the same guard you discarded
> before (unless it is also unreachable now :S).
>

I think your evaluation is correct here.

> It's not really "every time you connect to a new guard, retry the
> previously used" but "every time you connect to a new guard, check if
> its time to retry a unreachable previously used guard".
>
> * The time to retry is progressive, depending on how long the node has been
> unreachable: for example, hourly for the first 6 hours, every 4 hours
> until 3 days, etc. (see entry_is_time_to_retry).
>
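For reference, here is a rough sketch of that progressive schedule in Python
(the interval past 3 days is a placeholder; see entry_is_time_to_retry in
entrynodes.c for the real values):

  def retry_interval_hours(hours_unreachable):
      # Retry hourly for the first 6 hours, then every 4 hours until 3 days,
      # then less often (placeholder value below, not tor's actual one).
      if hours_unreachable < 6:
          return 1
      elif hours_unreachable < 3 * 24:
          return 4
      else:
          return 24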


>> Can this heuristic be improved? I think it should be considered for 
>> the algorithm.
>
> We have reviewed this strategy in the simulation for the original
> algorithm and we are generating new data. We are also updating the
> proposal to include the described strategy.
>

Great!

BTW, I noticed that the graphs here:
https://github.com/twstrike/tor_guardsim/issues/1
are not really explained, and the x and y axes are sometimes undocumented.
I have not had time to read the new tor_guardsim to understand these
graphs. Some documentation would be really helpful here; otherwise, you can
explain these things to me in Valencia.

Thanks!