[tor-talk] Looking for information about onion site user deanonymization

2021-05-03 Thread Seth David Schoen
Hi tor-talk,

I'm working as a consultant to a criminal defense lawyer who's
representing a defendant in a case involving Tor and an investigation
by U.S. law enforcement and foreign law enforcement.

In 2019 a foreign law enforcement agency claimed to identify the clearnet
IP addresses of a large number of people who were accessing an onion
site that the agency itself was monitoring or had taken control of.
We know of various methods by which this might be done, but I'm wondering
whether anyone has heard concretely about law enforcement capabilities or
practices in this area when users have not de-anonymized themselves, or
has heard rumors or reports of this being done around two years ago.

Thanks!
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] >170 tor relays you probably want to avoid (Oct 2019 @ Choopa)

2019-10-31 Thread Seth David Schoen
nusenu writes:

> InjureWellprepred
> ChicgoHopeful
> VillgerVenice
> FemleDiffer
> PossibilityCreture
> CrownDutchmn
> BeyondNtionl
> BridegroomDisster
> HrmonyCrown
> NurseryGreement
> RibbonUnderline
> CookbookRoundbout
> SectionPolitics
> PerfectThlete

Very odd naming convention.  It's kind of like

(random.choice(words) + " " + random.choice(words)).replace("a", "").title().replace(" ", "")

... why no letter a?
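For what it's worth, here's a self-contained sketch of that guess (the
word list is invented purely for illustration; the real generator's list
is unknown):

```python
import random

# Hypothetical word list for illustration only.
words = ["injure", "wellprepared", "chicago", "hopeful", "harmony",
         "crown", "nursery", "ribbon", "village", "venice"]

def relay_name(rng=random):
    """Guessed scheme: pick two words, strip every 'a', CamelCase them."""
    pair = rng.choice(words) + " " + rng.choice(words)
    return pair.replace("a", "").title().replace(" ", "")

print(relay_name())  # no letter 'a' ever survives in the output
```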

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Tor to become illegal in Europe?

2019-03-11 Thread Seth David Schoen
hi...@safe-mail.net writes:

> They're basically talking about eliminating criminal activities facilitated 
> online by the darknet, by making Tor and the dark web illegal and 
> inaccessible 
> in Europe.

But this discussion is one politician's view in a keynote address at a
police congress -- which doesn't imply much about police agencies' or
legislators' agreement with this idea.  We've heard similar language in
many countries and it hasn't necessarily led to prohibitions on privacy
tools.



Re: [tor-talk] You Can Now Watch YouTube Videos with Onion Hidden Services

2018-12-05 Thread Seth David Schoen
Seth David Schoen writes:

> if its operator knew a vulnerability in some clients' video codecs,

(or in some other part of Tor Browser, since the proxy can also serve
arbitrary HTTP headers, HTML, CSS, Javascript, JSON, and media files of
various types)

> it could also serve a maliciously modified video to attack them



Re: [tor-talk] You Can Now Watch YouTube Videos with Onion Hidden Services

2018-12-05 Thread Seth David Schoen
bo0od writes:

> This is another front end to YouTube:

Hi bo0od,

Thanks for the links.

This seems to be in a category of "third-party onion proxy for clearnet
service" which is distinct from the situation where a site operator
provides its own official onion service (like Facebook's facebookcorewwwi,
which the company has repeatedly noted it runs itself on its own
infrastructure).

Could you explain how this kind of design improves users' privacy or
security compared to using a Tor exit node to access the public version
of YouTube?  In this case the proxy will need to act as one side of
users' TLS sessions with YouTube, so it's in a position to directly
record what (anonymous) people are watching, uploading, or writing --
unlike an ordinary exit node which can at most try to infer these
things from traffic analysis.  Meanwhile, it doesn't prevent YouTube
from gathering that same information about the anonymous users, meaning
that this information about users' activity on YouTube can potentially
be gathered by two entities rather than just one.

The proxy could also block or falsely claim the nonexistence of selected
videos, which a regular exit node couldn't do, and if its operator knew
a vulnerability in some clients' video codecs, it could also serve a
maliciously modified video to attack them -- which YouTube could do, but
a regular exit node couldn't.

Are there tradeoffs that make these risks worth it for some set of
users?  Maybe teaching people more about how onion services work, or
showing YouTube that there's a significant level of demand for an
official onion service?



Re: [tor-talk] Post Quantum Tor

2018-05-29 Thread Seth David Schoen
Kevin Burress writes:

> honestly, ideally it would be a lot easier to do things with tor if it
> actually internally followed the unix philosophy and the layers of service
> could be used as a part of the linux system and modular use of the parts. I
> was just looking at BGP routing over tor. I'm not sure how to do that with
> the current implementation over hidden service. I'm having a hard time
> working out how to use it as layer 2 and encapsulate things over the
> network from one hidden service to another.

This is because Tor only carries TCP streams; it provides proxying and
exit services at the TCP layer.  You can't route arbitrary IP packets
over Tor, and so you can't, for example, ping or traceroute over Tor.

https://www.torproject.org/docs/faq.html.en#TransportIPnotTCP

Hidden services, for their part, don't even identify destinations with
IP addresses, so there's no prospect of using IP routing protocols to
describe routes to them.

There have been projects to try to make a router that would automatically
proxy all TCP traffic to send it through Tor by default.  (This would
require writing custom code, not just using existing routing tools, again
because Tor only operates at the TCP layer.)  I was excited about this
idea several years ago until the Tor maintainers reminded me that it would
expose lots of linkable traffic from applications that didn't realize
that they were supposed to remove linkable identifiers and behaviors.
For example, browsers that didn't realize they were running over Tor
would continue to send cookies from non-Tor sessions, and they would
continue to be highly fingerprintable.
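For reference, the transparent-proxying approach those projects used
looks roughly like this, a sketch built on Tor's TransPort feature; the
interface name and ports are example values, and this setup inherits all
the application-layer linkability problems described above:

```shell
# torrc on the router: accept transparently redirected TCP and DNS
TransPort 9040
DNSPort 5353
VirtualAddrNetworkIPv4 10.192.0.0/10
AutomapHostsOnResolve 1

# iptables: push LAN clients' DNS and TCP into Tor (eth1 = LAN side, example)
iptables -t nat -A PREROUTING -i eth1 -p udp --dport 53 -j REDIRECT --to-ports 5353
iptables -t nat -A PREROUTING -i eth1 -p tcp --syn -j REDIRECT --to-ports 9040
```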



Re: [tor-talk] Intercept: NSA MONKEYROCKET: Cryptocurrency / AnonBrowser Service - Full Take Tracking Users, Trojan SW

2018-03-20 Thread Seth David Schoen
grarpamp writes:

> [Quoting The Intercept]
> financial privacy “is something that matters incredibly” to the
> Bitcoin community, and expects that “people who are privacy conscious
> will switch to privacy-oriented coins” after learning of the NSA’s
> work here.

Or, maybe people who are privacy conscious should already have done so
following several years of academic, journalistic, and commercial work
on this subject! :-(



Re: [tor-talk] catastrophe: ip-api.com sees me

2018-02-08 Thread Seth David Schoen
Dash Four writes:

> Roger Dingledine wrote:
> >Using any browser with Tor besides Tor Browser is usually a bad idea:
> >https://www.torproject.org/docs/faq#TBBOtherBrowser
> I disagree with that statement. It is certainly _not_ a bad idea, provided 
> you know what you are doing.

As the documentation says, there are a couple of different things that
can go awry here.

* Your non-Tor Browser can be vulnerable to a proxy bypass (because
  other browsers don't necessarily consider that a very serious
  problem).  E.g., an attacker can serve you some HTML that uses
  some kind of browser feature that goes directly over the Internet,
  not via Tor.

* Your non-Tor Browser can be vulnerable to various kinds of
  tracking and fingerprinting, because other browsers haven't done as
  much to mitigate that.  E.g., an attacker can use some kind of
  supercookie to recognize you across sessions, or serve some kind
  of Javascript that queries various system properties that produce a
  unique long-term fingerprint that Tor Browser might have prevented.

* Your non-Tor Browser can be inherently distinctive because very
  few people are using any given other configuration.  E.g., you might
  be the only person in the world currently using Tor with a particular
  browser version, OS, language, and browser window size (even if a
  site doesn't use elaborate or complex Javascript to find out about
  your system's properties).

Your particular setup has probably mitigated the first of these
effectively, but maybe not the other two.

Now, there are ways that the Tor Browser may also have failed to fully
mitigate each of these risks.  And there could be other benefits to
using a different browser in terms of adversaries who know of zero-day
vulnerabilities in Tor Browser that might not be present in other
browsers.  (Some critics have pointed out that more potential attackers
probably have zero-days against the current Tor Browser at a given
moment than against, say, the current Google Chrome; at least, they
typically wouldn't have to pay as much money to buy them.)  But you
probably can't mitigate the second two concerns above on your own, which
might always mean more trackability and less anonymity of a certain kind
when using another browser with Tor.

Also,

* If you use something other than Tor Browser, you can get confused
  about when you are or aren't using Tor, or accidentally enable or
  disable it in the middle of some other activity, leading to several
  kinds of contamination between Tor and non-Tor sessions.

Very sophisticated and disciplined users might not trip over this
particular issue, but it's a relatively high risk and a lot of people
using the old TorButton setup definitely ran into this kind of problem.



Re: [tor-talk] Privacy Pass from Cloudflare, and the CAPTCHA problem

2017-11-20 Thread Seth David Schoen
bob1983 writes:

> 3. Even if this protocol is integrated in Tor Browser, after clicking "New
> Identity", all local data will be erased. Considering this feature is 
> frequently
> used by Tor users, we still need to solve some CAPTCHAs.

If the protocol is sound here in its unlinkability property, the Tor
Browser should not need to erase the store of tokens.  I realize that
this may be a challenge architecturally and conceptually, but in the
design of this protocol, persistence of the tokens shouldn't compromise
Tor's anonymity goals.

(Although it does potentially reduce the anonymity set a bit by
partitioning users into those who have the extension and those who
don't, as well as those who currently have tokens remaining and those
who are currently out of tokens.)



[tor-talk] Proposed DV certificate issuance for next-generation onion services

2017-11-02 Thread Seth David Schoen
Coinciding with the Tor blog post today about next-generation onion
services, I sent a proposal to the CA/Browser Forum to amend the rules
to allow issuance of publicly-trusted certificates for use with TLS
services on next-generation onion addresses (with DV validation methods,
in addition to the currently-permitted EV methods -- thereby permitting
individuals as well as anonymous service operators to receive these
certificates).

https://cabforum.org/pipermail/public/2017-November/012451.html

Thanks to various people here for discussing the merits of this with me.
We'll see what the Forum's membership thinks of the idea!



Re: [tor-talk] noise traffic generator?

2017-10-06 Thread Seth David Schoen
Matej Kovacic writes:

> Hi,
> 
> there is some interesting project called Noiszy: https://noiszy.com/
> 
> It generates fake traffic. It is more "artists" project that real
> countermeasure, but I am thinking to implement something like this on my
> network with several machines inside.
> 
> However, the main problem is that Noiszy works too random, and is not
> "walking" in websites enough time and enough consistent to give an
> impression someone is really browsing something.

There have been a few projects in this space before, like Helen
Nissenbaum's TrackMeNot, and at least two others that I'm not thinking
of right away.

I agree with your concern that it's currently too easy for an adversary
to use statistics to learn if traffic is human activity or synthesized.
Another problem is that the sites that the traffic generator interacts
with might themselves get suspicious and start responding with CAPTCHAs
or something -- which would then also reduce the plausibility of the
traffic.

I also wonder if someone has studied higher-order statistics of online
activity, in the sense that engaging in one activity affects your
likelihood of engaging in another activity afterward (or concurrently).
For example, you might receive an e-mail or instant message asking you
to look at something on another site, and you might actually do that.
On the other hand, some sites are more distracting and less conducive
to multitasking than others.  For example, you probably wouldn't be
playing a real-time online game while composing an e-mail... but you
might play a turn-based game.

There are also kind of complicated probability distributions about events
that retain attention.  For instance, if you're doing something that
involves low-latency interactions with other people, it's only plausible
that you're actually doing that if the other people were also available
and interacting with you.  The probability that a given person continues
communicating with you declines over time, and is also related to time
zone and time of day.  But there's also a probability that someone else
starts interacting with you.

Some of these things will probably have to be studied in some depth in
order to have a hope of fooling really sophisticated adversaries with
synthesized online activity.
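One crude way to capture that kind of higher-order structure is a Markov
model over activities, where the next activity's probability depends on
the current one.  A minimal sketch, with transition probabilities
invented purely for illustration:

```python
import random

# Invented transition probabilities: P(next activity | current activity).
transitions = {
    "email":  {"email": 0.4, "browse": 0.4, "game": 0.2},
    "browse": {"email": 0.3, "browse": 0.5, "game": 0.2},
    "game":   {"email": 0.1, "browse": 0.2, "game": 0.7},  # games retain attention
}

def simulate(start, steps, rng=random):
    """Generate a plausible-looking activity sequence from the Markov model."""
    seq, current = [start], start
    for _ in range(steps):
        row = transitions[current]
        current = rng.choices(list(row), weights=list(row.values()))[0]
        seq.append(current)
    return seq

print(simulate("email", 10))
```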



Re: [tor-talk] How to find trust nodes?

2017-09-27 Thread Seth David Schoen
George writes:

> But ultimately, Tor's topography mitigates against one of the three
> nodes in your circuit being compromised. If the first hop is
> compromised, then they only know who you are, but not where your
> destination is. If the last hop is compromised, they only know where
> you're going, but not who you are (unless your providing clear text of
> personally identifying information).

A challenge is that there are threat models in which a considerable number
of Tor users may be exposed, at least for some of their circuits.

* If a single adversary runs several fast nodes that are popular and whose
  relationship to each other is undisclosed, a substantial fraction of
  circuits may end up using that adversary's nodes as both entry and exit.
  The guard node design gives a relatively low probability of this
  happening to any individual user with respect to any individual
  adversary in any specific time period, but doesn't guarantee that it
  would be a particularly rare event for Tor users as a whole.

* If adversaries cooperate, they can get benefits equivalent to running many
  nodes even though each one only runs a few.

* If an adversary can monitor network activity and see both the entry and
  exit points for a given circuit, it can perform correlations even though
  it doesn't operate any nodes.  Or, an adversary that can monitor some
  networks can increase its chance of getting visibility of both ends of
  a connection by also operating some nodes, since some users whose entry
  or exit activity the adversary otherwise wouldn't have been able to
  monitor from network surveillance alone may sometimes randomly choose to
  use that adversary's nodes in one of these positions.

* An adversary that can monitor some kind of public or private online
  activity can perform coarse-grained timing correlation attacks between
  its own entry nodes (or parts of the Internet where it can see Tor
  node entry) and the online activity that it can see.  For example, if a
  user regularly uses Tor to participate in some kind of public forum,
  public chat, etc., the adversary could gather data about how entry
  traffic that it can see does or doesn't correlate with that participation.
  Or if an adversary can obtain logs about the use of a particular online
  service, even though those logs aren't available to the general public,
  it can also correlate that statistically with entry data that it has
  available for some other reason.
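A rough back-of-envelope calculation for the first point, with invented
numbers (real node selection is bandwidth-weighted and guard choice is
sticky, so treat this only as an order-of-magnitude illustration):

```python
# Invented example figures for illustration.
g = 0.05   # adversary's share of guard (entry) capacity
e = 0.05   # adversary's share of exit capacity

per_circuit = g * e                      # P(both ends compromised) for one circuit
n = 1000                                 # circuits built over some period
at_least_once = 1 - (1 - per_circuit) ** n

print(f"per-circuit probability: {per_circuit:.4f}")
print(f"P(compromised at least once in {n} circuits): {at_least_once:.2f}")
```

Even a small per-circuit probability adds up quickly over many circuits,
which is the intuition behind the guard design limiting entry-node churn.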

The "good news" is that a given Tor user is probably not very likely to
be vulnerable to many of these attacks from many adversaries when using
Tor infrequently or for brief periods.  Yet many of these attacks would
work at least some of the time against a pretty considerable amount of
Tor traffic.

I agree with your point that just having more random people run nodes
helps decrease the probability of success of several of these attacks.



Re: [tor-talk] New OONI release: Test sites you care about!

2017-09-27 Thread Seth David Schoen
Arturo Filastò writes:

> That said, something to keep in mind, is that OONI Probe is not a privacy 
> tool, but rather a tool for investigations and as such poses some risks (as 
> we explain inside of our informed consent procedure).
> 
> We are not aware of any OONI Probe users having gotten into trouble for using 
> our tool, but we prefer to air always on the safe side be sure that they 
> understand very well the speculative risks that they could run into.

Yes, I guess I'm just a bit surprised that there aren't more known
cases of censors actively trying to interfere with OONI in some way --
especially since it's already led to published reports about specific
censorship events and practices in specific countries.



Re: [tor-talk] Tor users in US up by nearly 100,000 this month

2017-09-03 Thread Seth David Schoen
Roger Dingledine writes:

> Asking Cloudflare how many people are deciding to solve their captchas
> today is measuring a different thing -- if I try to load a news article,
> see a cloudflare captcha, and say "aw, fuck cloudflare, oh well" and
> move on, am I a bot?

I'm just figuring that you can get useful relative rather than absolute
metrics if you assume that people's tendency to do this is relatively
stable across time and across user populations.  So you don't know how
many of the non-solvers are bots, but you can say that the solvers are
up 10% this month or something, which perhaps then suggests that non-bot
Tor users are up about 10% this month.

This still wouldn't reveal whether 60% or 95% of the non-solvers are
bots.



Re: [tor-talk] Tor users in US up by nearly 100,000 this month

2017-09-01 Thread Seth David Schoen
Scfith Riseup writes:

> Nope.
>
> Indication that Tor in use uptick unfortunately could point to more
> bots collecting Tor, not necessarily people using Tor. Wish there was
> a way to differentiate bots from meat.

Amusingly, CloudFlare would probably be in a position to do so because
they present many Tor users with CAPTCHAs.  While this has annoyed Tor
users quite a bit, if we assume that

* old and new Tor users are about equally likely to attempt the CAPTCHA
* old and new Tor users are about equally likely to pass it
* old and new users visit a similar proportion of CloudFlare-hosted sites
  via Tor exits
* CAPTCHAs are relatively effective at preventing access by bots
* CloudFlare keeps logs that clearly identify total volumes of successful
  CAPTCHA completion from Tor exit nodes

then CloudFlare would have good, meaningful data about trends in human
use of Tor.  They wouldn't know the overall volume of human or bot use
of Tor, but they could tell pretty accurately when human use is up or
down and by what fraction.

One confounding factor would arise if the new users are significantly
more or less likely than old users to use onion services.

I'd be happy to ask CloudFlare if they'd be willing to share this data
(maybe in relative rather than absolute numeric terms, like "the number
of people successfully completing a CAPTCHA per day from a Tor exit
node on September 1, 2017 is x% of what it was on January 1, 2016").
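The arithmetic behind that relative metric is simple; a sketch under the
assumptions listed above, with invented counts:

```python
# Invented daily counts of successful CAPTCHA solves from Tor exit nodes.
solves_jan_2016 = 40_000
solves_sep_2017 = 52_000

# Under the stated assumptions, this ratio tracks trends in human Tor use,
# even though the absolute number of humans (or bots) remains unknown.
relative = solves_sep_2017 / solves_jan_2016
print(f"human Tor use on 2017-09-01 is about {relative:.0%} of 2016-01-01 levels")
```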



Re: [tor-talk] Neal Krawetz's abcission proposal, and Tor's reputation

2017-08-31 Thread Seth David Schoen
Paul Syverson writes:

> As the cryptographic design changes for next generation onion services
> are now being rolled out, that
> in-my-opinion-never-actually-well-grounded concern will go away. I
> cover at a high level, a design for onion altnames in "The Once and
> Future Onion" [1] that I think is consistent with the current CA/B
> Forum issues about onion addresses. It doesn't cover all desired
> cases, so I hope you are successful. But I think it covers a lot of
> the ground.
> 
> [1] https://www.nrl.navy.mil/itd/chacs/syverson-once-and-future-onion

Thanks, I guess that's Section 5 there.

Do you think there should perhaps be a new OID with semantics like "for
each identifier that is a subject of this certificate and that contains
'onion' as one DNS label, we performed both clearnet and onion site DV"
and so "you can feel free to access the .onion version of this site
while also believing that it's run by the same organization as the TLD"?
Presumably such an OID could be added by a CA without a new CA/B Forum
ballot because it's just asserting an additional check and not reducing
the CA's verification obligations.



Re: [tor-talk] Neal Krawetz's abcission proposal, and Tor's reputation

2017-08-30 Thread Seth David Schoen
Roger Dingledine writes:

> I think finding ways to tie onion addresses to normal ("insecure web")
> domains, when a service has both, is really important too. I'd like to
> live in a world where Let's Encrypt gives you an onion altname in your
> https cert by default, and spins up a Tor client by default to let users
> reach your webserver using whichever level of security they prefer.

Well, I'm still working on being able to write to the CA/B Forum about
this issue... hopefully we'll find out soon what that community is
thinking.



Re: [tor-talk] Motivations for certificate issues for onion services

2017-08-10 Thread Seth David Schoen
Dave Warren writes:

> I don't completely understand this, since outside the Tor world it's
> possible to acquire DV certificates using verification performed on
> unencrypted (HTTP) channels.
> 
> Wouldn't the same be possible for a .onion, simply requiring that the
> verification service act as a Tor client? This would be at least as good,
> given that Tor adds a bit of encryption.

I think Roger's reply to my message addresses reasons why I think this
is a good argument, and I'm in agreement with you.  However, with
next-generation onion services, it should no longer be necessary to have
any form of this argument.



[tor-talk] Motivations for certificate issues for onion services

2017-08-09 Thread Seth David Schoen
Hi folks,

For a long time, publicly-trusted certificate authorities were not
clearly permitted to issue certificates for .onion names.  However, RFC
7686 and a series of three CA/Browser Forum ballots sponsored by Digicert
have allowed issuance of EV certificates (where the legal identity of
the certificate requester is verified offline before the certificate is
issued).  This has allowed Digicert to issue a number of such certificates
to interested (extremely non-anonymous!) onion service operators.

https://crt.sh/?Identity=%25.onion

So far Digicert is the only browser-trusted CA to have taken advantage of
this policy.  Notably, it doesn't apply to certificate authorities that
only issue DV certificates, because nobody at the time found a consensus
about how to validate control over these domain names.  There was also
a long-standing concern about cryptographic strength mismatch, in the
sense that the cryptography used by onion services was weaker than the
cryptography that's now used in TLS.  (I think this concern was misplaced,
but I believe it's served as one of the main rationales for distinguishing
EV from DV.)

So, there has been a suggestion that this issue might be revisited with
the next generation onion services because they have stronger
cryptographic primitives.  Apparently these have now been not only
implemented but actually demonstrated:

https://blog.torproject.org/blog/new-and-improved-onion-services-will-premiere-def-con-25

I'd like to prepare to raise this issue with the CA/Browser forum in
anticipation of a ballot there to have it be possible for DV certificates
to be issued to onion services.  So I wanted to ask two things here:

(1) What's the status of onion services looking like now?  I haven't
seen Roger's DEF CON talk.  (Was it recorded?)

(2) What reasons do people have for wanting certificates that cover
onion names?  I think I know of at least three or four reasons, but I'm
interested in creating a list that's as thorough as possible.



Re: [tor-talk] Tor's work

2017-06-18 Thread Seth David Schoen
Suhaib Mbarak writes:

> Dear Seth Schoen:
> 
> Thank you very much for your extremely appreciated answer:
> 
> It seems that you were the most person who got what I'm looking for.
> To be honest I'm doing my best to find away to figure out how to achieve my
> goal to show student how TOR works as I explained in my last email to you.
> 
> I'm using Shadow as network simulator it is running tor as a plugin
> attached to Shadow but I couldn't change the tor clients code to log the
> information which I need.
> 
> Can you please help me step by step (back and forth between us) to do
> something deliverable??

I'm glad that my answer was helpful to you, but I don't think I have
enough familiarity with the Tor code base to help you with the specific
things that you're looking for.



Re: [tor-talk] tor-talk Digest, Vol 77, Issue 9

2017-06-08 Thread Seth David Schoen
Suhaib Mbarak writes:

> I'm a master student and doing some researches on TOR . I'm using shadow
> simulator; not real tor network; my goal is only to run an experiment and
> from the output of that experiment I can confess my students that Tor
> really : [...]

It seems to me that one useful possibility is to modify the Tor client so
that it outputs logs of the decisions it makes and the actions it takes,
as well as, maybe, the cryptographic secrets that it uses.  For example,
your modified Tor client could print out how it chose a path, and the
actions that it took to build the path, and the actual encryption keys
that it used in communicating with the nodes along the path.

You could then also use a packet sniffer (or some mechanism for packet
capture if your network is totally virtual) to examine the actual
traffic in your simulated network, and, for example, to decrypt it using
the keys that were logged by the modified client, showing exactly what
information can be seen by someone in possession of each secret key, and
conversely which keys are necessary in order to learn which information.
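A toy sketch of that last idea: the layered ("onion") encryption can be
demonstrated with nothing but the standard library.  This uses a
hash-based XOR keystream purely for illustration -- Tor's real protocol
uses AES in counter mode with keys negotiated per hop -- but it shows
exactly which key peels which layer, and what each relay position can see:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key (toy cipher, NOT Tor's real AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# One symmetric key negotiated with each relay on the circuit.
keys = {"guard": b"k1", "middle": b"k2", "exit": b"k3"}

message = b"GET / HTTP/1.1"

# The client wraps the payload in three layers, innermost (exit) first.
cell = message
for hop in ["exit", "middle", "guard"]:
    cell = xor(cell, keys[hop])

# Each relay removes exactly one layer; plaintext only appears
# once the exit relay's layer has been stripped.
after_guard = xor(cell, keys["guard"])           # guard still sees ciphertext
after_middle = xor(after_guard, keys["middle"])  # middle still sees ciphertext
after_exit = xor(after_middle, keys["exit"])     # exit sees the plaintext

assert after_guard != message and after_middle != message
assert after_exit == message
```

A student who has captured the simulated traffic and the logged keys can
repeat the same peeling by hand and confirm that possession of only the
guard's key reveals nothing about the payload.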

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Tor source code

2017-06-08 Thread Seth David Schoen
By the way, there's an interesting new study

https://www.ieee-security.org/TC/SP2017/papers/84.pdf

that claims that many people believe communications security is "futile"
because of inaccurate mental models of cryptography, and strongly
endorse security through obscurity.

I've been thinking a lot about these results (it's worth reading the
paper) and one way that I've been trying to conceive of it is that the
research showed that many participants thought that the developer of a
security technology must, inherently, always know how to crack or
defeat that technology.  This might be true at a technical level if
encryption always worked like a substitution cipher, where there is no
secret key but knowledge of the details of the cipher is equivalent to
knowing how to crack it, or if public key cryptography didn't exist,
so that many-to-many communications required trusted authorities to
distribute key material.

Participants in that study did not tend to feel that encryption software
ought to be open source because they seemed to believe that the
developer of a security tool inherently, so to speak, knows the code
and can always use that knowledge to break users' security.  In this
model other motivated attackers will gradually also learn the secret
knowledge that they need to break the system, but disclosing technical
details of how it works would be an especially bad idea because it would
greatly speed up the process for the attackers.  (Then security through
obscurity is understood to be the only possible form of security.)

The study suggests that an important challenge for developers of security
systems may be finding a way to communicate how security need not depend
on obscurity, and also need not depend on trusting inventors of security
systems to keep secrets.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Tor source code

2017-06-08 Thread Seth David Schoen
Suhaib Mbarak writes:

> Dear all.
> 
> My question is to make sure wether tor source code is open and available
> for public or not?

Yes, it has been open source since the beginning of the project.
Currently, the code is available at

https://gitweb.torproject.org/tor.git

> In case it is open source and can be modified how it is secure?

Open source means that anyone is allowed to make their own changes (and
share those with the public if they want), but there is an official
version from the Tor Project which only official Tor maintainers can
change.  The official Tor maintainers receive suggestions from the public,
but they make the final decision about whether or not other people's
changes can become part of the official version of Tor.

For example, if you wanted to change something, you could make your own
modified version without anyone's permission, but it wouldn't be the
official version.  You would need to ask the maintainers to adopt your
changes if you wanted them to become part of the official version.

There is still an interesting question about whether people could somehow
trick the Tor maintainers into including a change that is actually
detrimental, even though it appears to be useful.  In many ways, the Tor
project relies on public scrutiny to confirm that changes that get
included in the official version are useful and don't introduce problems
or security holes.  There is a fairly broad consensus that this is a
useful way to work, yet I don't think that people are confident that all
of the risk has been mitigated, since there are also security research
projects that show that there are ways of intentionally creating bugs
that are subtle and carefully disguised as useful functionality.

So, there is still a need for ongoing research about how to learn to
detect (whether by human knowledge, by coding standards, by using
different languages or libraries, by creating new software tools, or
by something call formal methods where properties of code are proven) if
people are trying to disguise or hide a bug or vulnerability inside of a
useful contribution.

The Tor Project has actually thought about this issue a lot, if you're
very interested in it... there are probably other resources and
presentations that you could look at that further examine the issue.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] State of bad relays (March 2017)

2017-03-03 Thread Seth David Schoen
nusenu writes:

> that put users at risk because they potentially see traffic entering
> _and_ leaving the tor network (which breaks the assumption that not
> every relay in a circuit is operated by the same operator).

(strictly speaking, the assumption that no more than one relay in a
circuit is operated by the same operator)

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Exits: In Crossfire on the Front Lines

2017-01-04 Thread Seth David Schoen
grarpamp writes:

> [quoting movrcx]
> In today’s cyberwar, Tor exit nodes represent the front line of
> battle. At this location it is possible to directly observe attacks,
> to launch attacks, and to even gather intelligence. An alarming figure
> disclosed by The Intercept’s Micah Lee attributed 40% of the network
> addresses used in the Grizzly Steppe campaign are Tor exit nodes. And
> this is not a good thing.

This is a fairly different angle on what Micah originally wrote.

https://theintercept.com/2017/01/04/the-u-s-government-thinks-thousands-of-russian-hackers-are-reading-my-blog-they-arent/

(His article says that, while it's plausible that these attacks were
sponsored by the Russian government, the IP addresses involved don't
tend to prove that because many of them -- being Tor exit nodes --
could have been used by any attacker.)

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Will Quantum computing be the end of Tor and all Privacy?

2016-11-28 Thread Seth David Schoen
Seth David Schoen writes:

> Notably, Google has even experimentally deployed a PQ ciphersuite
> in Chrome (that uses elliptic-curve cryptography in parallel with
> Alkim et al.'s "new hope" algorithm).
> 
> https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html

Coincidentally, Adam Langley just announced today that this experiment
is ending (with fairly favorable results):

https://www.imperialviolet.org/2016/11/28/cecpq1.html

Well, we don't know how favorable they were against adversarial
cryptanalysis, but they were favorable operationally.  (If you're reading
this and you do happen to know how well CECPQ1 resists adversarial
cryptanalysis, please share!)

-- 
Seth Schoen  <sch...@eff.org>
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Will Quantum computing be the end of Tor and all Privacy?

2016-11-28 Thread Seth David Schoen
Flipchan writes:

> I dont think so, quantum 4times at fast so we just need to generate 4times as 
> strong keys the entropy will just be bigger, But as Long as we are not useing 
> like 56 bit des keys its okey

You're probably thinking of safety of symmetric encryption, where there
is a quadratic speedup from quantum computers.

https://en.wikipedia.org/wiki/Grover's_algorithm

The situation is a lot worse with public-key encryption, where there
is a much bigger speedup

https://en.wikipedia.org/wiki/Shor%27s_algorithm

So experts generally believe that we don't really need new symmetric
encryption algorithms to defend against quantum computers (things like
AES are OK), but we do need new public-key algorithms (things like RSA
are not OK).  This is discussed in the beginning of

https://www.pqcrypto.org/www.springer.com/cda/content/document/cda_downloaddocument/9783540887010-c1.pdf
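The difference between the two speedups can be made concrete with
back-of-envelope arithmetic.  The cubic gate count used for Shor below is
a coarse textbook estimate (real circuit costs depend on the multiplication
method), but the shape of the comparison holds:

```python
def grover_effective_bits(key_bits: int) -> int:
    # Grover searches N = 2^k keys in about sqrt(N) = 2^(k/2) quantum steps,
    # so a k-bit symmetric key retains roughly k/2 bits of security.
    return key_bits // 2

assert grover_effective_bits(128) == 64   # AES-128: borderline against quantum
assert grover_effective_bits(256) == 128  # AES-256: still comfortable

def shor_gate_estimate(modulus_bits: int) -> int:
    # Shor factors an n-bit RSA modulus in polynomial time,
    # roughly O(n^3) gates in a simple estimate.
    return modulus_bits ** 3

# Doubling the RSA key size only multiplies the quantum attack cost
# by about 2^3 = 8, so no practical key size restores exponential
# difficulty -- hence the need for new public-key algorithms.
assert shor_gate_estimate(4096) // shor_gate_estimate(2048) == 8
```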

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Will Quantum computing be the end of Tor and all Privacy?

2016-11-27 Thread Seth David Schoen
hi...@safe-mail.net writes:

> So, where does this put Tor, encryption and general privacy? Shouldn't we 
> start preparing ourselves for the inevitable privacy apocalypse?

People have been working on this for years, and they're making good
progress.

https://en.wikipedia.org/wiki/Post-quantum_cryptography

Notably, Google has even experimentally deployed a PQ ciphersuite
in Chrome (that uses elliptic-curve cryptography in parallel with
Alkim et al.'s "new hope" algorithm).

https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html

If this works well and research continues to support this approach,
it should be standardized as a ciphersuite in TLS.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] What is the different between Official TorBrowser and Browser4Tor?

2016-11-26 Thread Seth David Schoen
Jason Long writes:

> Hello.
> I found a version of Tor in "http://torbrowser.sourceforge.net/", But what is 
> the different between it and official TorBrowser? Is it a trust version?

This is an unrelated project that seems to be trying to confuse people
by visually imitating the old design of the Tor Project web site (and
using "torbrowser" in the URL).

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Find Real IP via ISP.

2016-11-25 Thread Seth David Schoen
Jason Long writes:

> Are you kidding? Iranian relays are good in this scenario? Why?

Because they might be less likely to cooperate with ISPs in other
countries to track Tor traffic.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Please Remove Tor bridge and... from Censorship countries.

2016-11-07 Thread Seth David Schoen
Jason Long writes:

> To be honest, I guess that I must stop using Tor It is not secure.I can 
> remember that in torproject.org the Tor speaking about some peole that use 
> Tor. For example, reporters, Military soldiers and...But I guess all of them 
> are ads. Consider a soldier in a country that want send a secret letter to 
> his government and he want to use Tor but the country that he is in there can 
> sniff his traffic :( 

That soldier has a potential problem if the government is aggressively
monitoring Internet traffic, because they can look at the time that the
message was received and ask "who was using Tor in our country at that
time?".  This happened in 2013 when someone sent a bomb threat using
Tor on his university campus.  Apparently he was the only person using
Tor on campus at the time the threat was sent.

http://www.dailydot.com/crime/tor-harvard-bomb-suspect/

The ability to do this doesn't require the government to operate any of
the nodes and doesn't require them to be operated in the same country.
For instance, Harvard University was able to identify this person even
though he was using only Tor nodes that were outside of the university's
network.  (It might have been much harder if he had been using a bridge
that the university didn't know about, or if he had sent the threat
from somewhere outside of the campus network.)

If there are ways of sending the letter that introduce a delay, then it
might be harder for the government to identify the soldier because then
there is some amount of Tor use at a time that's not obviously related
to the sending of the letter.  There might still be a concern that the
amount of data that the soldier transmitted over the Tor network is
very similar to the size of the letter, which may be a unique profile.
(That's a concern for systems like SecureDrop because people upload
large documents with a unique size; the number of people who transmitted
that exact amount of information on a Tor connection in a particular
time frame will be very small.)

There's lots to think about and a good reminder that the Tor technology
isn't perfect.  But I wouldn't agree with the idea that there's no point
in using Tor.  Lots of people are getting an anonymity benefit from
using it all of the time.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Please Remove Tor bridge and... from Censorship countries.

2016-11-06 Thread Seth David Schoen
Jason Long writes:

> Not from ISP!! It is so bad because ISPs are under 
> governments control. If an ISP can see I use Tor then it is a good evidence 
> in censorship countries.You said " If a government is running the bridge, it 
> will know where the users are who are using that particular bridge.", In your 
> idea it is not silly? I mean was it and Tor must ban it.

My point is that people in other countries could still benefit from these
services, especially if they don't mind as much that the government of a
country where they don't live knows something about their Tor traffic.
For example, if I live in Germany, maybe I am more comfortable with my
Tor circuits going through Iran, compared to someone who lives in Iran
who is unhappy about that.  Both people might agree that the Iranian
government probably spies on the Tor network in a way they disagree
with, but the person who lives in Iran may see this as a practical,
important thing to worry about, while the person who lives in Germany
may think it's not as practically important.  Or maybe someone living
in Argentina is trying to hide their location from a particular person,
but not from the government, and doesn't really mind if their data goes
through Tor nodes in their own country.

If you're using bridges to hide the fact that you use Tor at all, you
need some way to know if the particular bridges and technologies you
use can accomplish that goal.  That might include knowing the person
or organization who runs the bridge that you use.  If you use bridges
that are run by unknown people, you get a much greater risk that those
bridges are maliciously tracking your use of Tor, regardless of what
country they're physically located in.

I totally agree that surveillance by ISPs and governments is very serious
and very disturbing.  Tor's design is partly about letting people use
resources that are "somewhere else" so that perhaps they're not under
surveillance by the user's own government or ISP, or aren't all under
surveillance by the same people.  This will probably work less well
overall if the Tor developers try to single out particular countries as
extra-bad so that they can't participate in Tor at all.  That would mean
fewer countries overall participating in Tor, and an easier time for
people trying to do surveillance in the somewhat-less-bad countries.
And it would mean fewer choices for users about where to send their
traffic.

One thing that might be useful would be a way for Tor users to actively
pick what jurisdictions (or fiber optic cables or Internet exchange
points) they do or don't want their data to pass through, and have the
Tor client respect those preferences.  This is helpful both because
individual Tor users believe different things and because they have
different threat models.  I believe there's an old mechanism in the
torrc configuration file to avoid using nodes in particular countries,
but very few Tor users use this or understand how to use it.  Maybe it
could be made clearer and more convenient and integrated with the Tor
Browser interface in some way.
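The mechanism in question is the ExcludeNodes family of torrc options,
which accept GeoIP country codes in braces.  A minimal example follows;
the country codes are purely illustrative, not a recommendation, and the
exclusions are only as accurate as the GeoIP database Tor uses:

```
# torrc -- avoid building circuits through relays geolocated
# in these countries (codes are illustrative)
ExcludeNodes {ir},{cn}

# Honor the exclusions even if no other relays are available
StrictNodes 1

# Variant that restricts only the exit position
ExcludeExitNodes {us}
```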

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Timing attacks and fingerprinting users based of timestamps

2016-11-06 Thread Seth David Schoen
Flipchan writes:

> So i was thinking about timing attacks and simular attacks where time is a 
> Big factor when deanonymizing users . 
> and created a Little script that will generate a ipv4 address and send a get 
> request to that address 
> https://github.com/flipchan/Nohidy/blob/master/traffic_gen.py then delay x 
> amount of seconds and do it again. This will probelly make it harder for the 
> attacker to fingerprint the users output data due to the increased data flow 
> coming out from the server. 
> 
> So to protect against traffic timing attacks and simular would be to generate 
> More data.

This is called padding traffic and it's been studied a bit in relation
to systems like Tor.  Roger has often said that a conclusion of the
studies was that it's hard to get a lot of privacy benefit from most
padding schemes, but it might be good to know what the state of the
art is in padding attacks and defenses.
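One reason padding is hard to get right can be seen in a minimal sketch.
Splitting traffic into fixed-size cells (loosely modeled on Tor's own
fixed-size cells; the 512-byte figure here is illustrative) hides exact
payload sizes, but the *number* of cells still leaks information:

```python
CELL = 512  # bytes per cell, loosely modeled on Tor's fixed-size cells

def pad_to_cells(payload: bytes) -> list:
    """Split a payload into fixed-size cells, zero-padding the last one,
    so every unit on the wire has an identical size."""
    cells = []
    for i in range(0, len(payload), CELL):
        chunk = payload[i:i + CELL]
        cells.append(chunk + b"\x00" * (CELL - len(chunk)))
    if not cells:  # even an empty send emits one dummy cell
        cells.append(b"\x00" * CELL)
    return cells

short = pad_to_cells(b"hi")
long_ = pad_to_cells(b"x" * 1300)

# An observer counting bytes sees only multiples of the cell size...
assert all(len(c) == CELL for c in short + long_)
# ...but cell *counts* still leak: 1 cell vs 3 cells.
assert (len(short), len(long_)) == (1, 3)
```

Closing that remaining leak requires dummy cells sent at a constant rate
regardless of real traffic, which is where the bandwidth cost of padding
schemes becomes hard to justify for the benefit obtained.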

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Please Remove Tor bridge and... from Censorship countries.

2016-11-06 Thread Seth David Schoen
Jason Long writes:

> You said the governments can see a user bandwidth usage and it is so bad 
> because they can understand a user use Tor for regular web surfing or use it 
> for upload files and...  
> You said governments can see users usages but not contents but how they can 
> find specific users if Tor hide my IP?!!

Tor hides your IP address from the sites you're communicating with,
but not from your own ISP (for example), or from the Tor bridge or guard
node that you use.

In the original design of Tor there was absolutely no attempt to hide
who is using Tor, only what they are doing with it.  One idea was that
lots of people should use Tor for lots of things, so that it will be
hard to guess why a particular person uses Tor.

In the case of bridges for anticensorship, there is also some attempt
to hide who is using Tor (especially because of the idea that using
Tor can be forbidden or blocked in certain countries).  If a particular
bridge technology is unblocked, maybe the government doesn't know how
to detect it yet, so maybe they don't know who the Tor users who use
that technology are.  If a government is running the bridge, it will
know where the users are who are using that particular bridge.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Please Remove Tor bridge and... from Censorship countries.

2016-11-06 Thread Seth David Schoen
Jason Long writes:

> Hello Tor Developers and administrator.The Tor goal is provide Secure web 
> surfing as free and Freedom but unfortunately some countries like Iran, 
> China, North Korea and... Launch Tor bridges for spying on users and sniff 
> their traffics and it is so bad and decrease Tor users and security. If Tor 
> Project goal is Freedom and Anti Censorship then it must ban all bridges and 
> Servers from those countries. Please consider it and do a serious job.

Tor's approach to this issue is generally to look for ever-greater
geographic diversity of servers.

The Tor design assumes that there could be monitoring of servers in a
particular network, but hopes that this won't be a big problem because
most organizations monitoring Tor nodes can only see a part of the
overall network.  In that case, they can hopefully only see a part of
the path that a particular user's traffic takes, so they may not know
where the user is and also whom the user is communicating with (though
they might know one or the other).

In this model, it's not necessarily bad to have nodes on networks that
are hostile -- because the people doing the monitoring get incomplete
information.  At the same time, having nodes in many places can decrease
how complete a picture any one network operator or government can get.
For example, suppose that the U.S. government, the Chinese government,
and the Iranian government are all trying to spy on Tor users whose
traffic passes through their territory, but the governments don't directly
cooperate with each other.  In that case, having a user use nodes in all
3 jurisdictions is probably great for anonymity because each jurisdiction
to some extent protects facts about the user's activity from the other
jurisdictions, and it's hard for anyone to put the whole picture together.

If people want to hide the fact that they're using Tor at all, and are
using bridges for that reason, they probably should not use bridges
inside their own country.  But those bridges could be useful to people
in other countries who aren't trying to hide from the same adversary.

If an exit node is unable to reach a lot of network resources because
of censorship on the network where it's located, it should be possible
to detect this through scanning and flag it as a BadExit so that clients
will avoid using it in that role.

There's still a problem when network operators pool their information or
when governments can monitor networks outside of their own territory.
This is a practical problem for path selection and also for assessing
how much privacy Tor can actually provide against a particular adversary.
For instance, if the U.K. government taps enough of the world's Internet
links, or trades data about Tor users with other governments, it might
be able to learn a lot about a high fraction of Tor users even if they
don't use nodes that are in the U.K.  That could be hard to fix without
adopting a different anonymity design or finding a way to prevent these
taps and exchanges of data.

People have been thinking about that kind of issue quite a bit, like in

https://www.nrl.navy.mil/itd/chacs/biblio/users-get-routed-traffic-correlation-tor-realistic-adversaries

and other research projects, and to my mind the news isn't necessarily
that good.  But the key point is that having nodes on an unfriendly
network isn't necessarily bad in itself unless that network actually
sees interesting data as a result (or actively disrupts traffic in a way
that doesn't get blacklisted from clients' path selection).  And that can
sometimes happen, but doesn't always have to happen, and people on other
networks can still get a potential privacy or anticensorship benefit in
the meantime.

Notice that this argument doesn't depend on saying that what governments
are doing is OK, or that they don't have ill will toward the Tor network
or particular Tor users.  It also doesn't prove that governments will
fail to monitor the network; there's a lot of uncertainty about how
effective governments' capabilities in this area are.

Finally, there's an issue about identifying which nodes are secretly
run by the same organizations (or secretly monitored by the same
organizations!) which fail to admit it.  This is a form of Sybil attack,
where one entity pretends to be many different entities.  If a government
set up many ostensibly unrelated nodes, and clients believed they were
actually unrelated, it would increase the chance that a given Tor user
used several of those nodes for the same circuit, decreasing anonymity.
Tor can probably do better about detecting this.  It's not certain that
blacklisting countries would help much with this, because we don't know
which governments are attempting this to what degrees, and because they
don't have to host their nodes on IP addresses in their own jurisdiction!
If the North Korean government wants to do this sort of attack, it can
pay to set up a bunch of servers in France and Germany, which users and

Re: [tor-talk] Tor and Google error / CAPTCHAs.

2016-10-03 Thread Seth David Schoen
Alec Muffett writes:

> To a first approximation I am in favour of maximising all of those, but
> practically I feel that that's a foolhardy proposition - simply, my Netflix
> viewing, or whatever, does not need to be anonymised.

I appreciate your approach to analyzing what Tor-like tools need to be
able to do, but I wanted to question this a little bit.

Some of us privacy advocates have felt that it's quite bad that
communications technologies generate location and association metadata
in the first place.  I've often said in interviews that it's a flaw in
the cell phone infrastructure that it generates location data about
its users, for example, and that it would be better to have a mobile
communications infrastructure where location anonymity was the default
for everybody all of the time.

In the Netflix case, when you use an account to sign into their service,
just like any other, you're creating evidence of where you were when
you were watching that movie, which is also the basis of other evidence
about who was with you or who knows whom (like if you watched Netflix
from someone else's house, or two people watched Netflix from the same
place, or one person watched Netflix and another person signed into a
different service).

I wouldn't want to concede that it's appropriate that all of that data
gets generated all of the time, even if you can't see any sensitivity to
it at a particular moment.  And if location privacy continues to happen
on a purely opt-in basis, it'll continue to draw more attention to people
who are using tools to protect it, and it'll continue to be hard for
people to anticipate when they're going to turn out to have needed it.
It seems that people often discover later on that they wish they had
taken precautions to protect some data that didn't seem significant at
the moment.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Tor-Retro' for OS/2 Warp 4.52 Latest Release (2001) ?

2016-05-31 Thread Seth David Schoen
NTPT writes:

> There is no motivation to make exploits and other stuff on rare OSses.. 

There's a certain circularity to this: if you use rare OSes because
attackers aren't interested in them and you convince lots of people that
this is a good strategy, attackers may then get more interested in them.
So it's at least not a strategy that can scale very well.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Could Tor be used for health informatics?

2016-05-30 Thread Seth David Schoen
Paul Templeton writes:

> Where Tor may fit...
> 
> The Tor network would provide the secure transport - each site would create 
> an onion address. Central servers would keep tab of address and public keys 
> for each site and practitioner.

I'm not convinced this is a good tradeoff for this application.  The
crypto in the current version of hidden services is weaker in several
respects than what you would get from an ordinary HTTPS connection.
These users probably don't need (or want?) location anonymity for either
side of the connection and may not appreciate the extra latency and
possible occasional reachability problems associated with the hidden
service connection.



Re: [tor-talk] augmented browsing - "sed inside torbrowser"

2016-05-16 Thread Seth David Schoen
haaber writes:

> Hello,
> 
> I wonder if there are more interested people out there to include a
> "postprocessing" of the HTML code via  *sed*  type search & replace
> expressions. A tiny sed copy could be included in the browser and a
> domain-based list of expressions be given to sed that modifies the html
> code(s) according to personal tastes.

There is a nice existing and non-Tor Browser-specific tool that does
something along these lines:

https://en.wikipedia.org/wiki/Greasemonkey

It may be a bit more elaborate than what you were thinking of but it's
a nice tool that can handle a variety of use cases -- and should be
fully compatible with Tor Browser already.
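
To give a sense of the idea, here is a toy Python sketch of domain-based
search-and-replace rules (Greasemonkey does the real thing in JavaScript
inside the browser; the rule table and names here are made up for
illustration):

```python
import re

# Per-domain substitution rules applied to a page's HTML before display,
# in the spirit of the "sed inside the browser" suggestion above.
RULES = {
    "example.com": [(re.compile(r"<blink>(.*?)</blink>", re.S), r"\1")],
}

def postprocess(domain, html):
    for pattern, repl in RULES.get(domain, []):
        html = pattern.sub(repl, html)
    return html

print(postprocess("example.com", "Hello <blink>world</blink>"))  # Hello world
```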



Re: [tor-talk] Does Facebook Onion Work?

2016-03-09 Thread Seth David Schoen
Fkqqrr writes:

> Oskar Wendel  writes:
> 
> BTW, does Facebook have an onion version?

Probably one of the most famous onions, https://facebookcorewwwi.onion/.

See

https://lists.torproject.org/pipermail/tor-talk/2014-October/035421.html



Re: [tor-talk] .onion name gen

2016-03-04 Thread Seth David Schoen
Scfith Rise up writes:

> I'm pretty sure that the onion address is generated directly from the private 
> key, at least if you have ever played around with scallion or eschalot. So 
> what you just wrote doesn't apply in that way. But again, I could be wrong. 

Mirimir's reference at

https://trac.torproject.org/projects/tor/wiki/doc/HiddenServiceNames

shows that they are truncated SHA-1 hashes, 80 bits in length, of "the
DER-encoded ASN.1 public key" of "an RSA-1024 keypair".

So you have the space of public keys (indeed, it's considerably less than
1024 bits if you want to actually be able to use it as a keypair) and the
space of 80-bit truncated hashes, and the former is dramatically larger
than the latter.  So over the entire space of keys, collisions are not
just possible but are required and even extremely frequent.  On the other
hand, they're so difficult to find that nobody knows a single example!
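
Concretely, that derivation can be sketched in a few lines of Python
(the input bytes below are a placeholder, not a real DER-encoded key):

```python
import base64
import hashlib

# Sketch of the (v2) onion-name derivation described above: SHA-1 of the
# DER-encoded public key, truncated to 80 bits (10 bytes), then base32.
def onion_name(der_pubkey: bytes) -> str:
    digest = hashlib.sha1(der_pubkey).digest()[:10]  # first 80 bits
    return base64.b32encode(digest).decode().lower() + ".onion"

# 10 bytes encode to exactly 16 base32 characters, hence 16-char names.
print(onion_name(b"\x30\x81\x89placeholder"))
```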



Re: [tor-talk] .onion name gen

2016-03-04 Thread Seth David Schoen
Scfith Rise up writes:

> It _would_ be the same private key. Good luck with generating 1.2 septillion 
> permutations (32^16). 

This would be true if the public key were used directly as the onion name
(which might be possible in certain elliptic curve systems because keys
are so small).

But in this case, the onion name is calculated from a hash of the public
key, and the size of the hash is much smaller than the size of the
underlying pubkey (80 bits vs. 1024 bits).  The pigeonhole principle
requires that many, many different pubkeys must have the same hash --
on average, about 2⁹⁴⁴ pubkeys would have the same hash.  When you
get a perfect collision from scallion, after doing that 2⁸⁰ work
(analogous to about 11 days of work by the entire Bitcoin network --
which you can think of as surprisingly much or surprisingly little work),
you're still astronomically unlikely to have the same private key!



Re: [tor-talk] Lets Encrypt compared to self-signed certs

2016-02-29 Thread Seth David Schoen
ban...@openmailbox.org writes:

> Hi David. Thanks for chiming in. Please add a feature for pinning at
> the key level as IMO it provides the best protection.

We don't have any tools for pinning at all but you can read people's
tips about it on the Let's Encrypt community forum.

> Will the logs provide users/site owners with a way to independently
> check if coercion has happened?

The logs obviously don't have metadata about whether certificates are
the result of coercion.  But if you are the site owner and you see a
certificate in the log that you didn't ask for, you have evidence that
there's been a problem; and if you are a user and you see a certificate
on the site that isn't in the log, you have evidence of a different kind
of problem.

> Would systems like Cothority help Lets Encrypt users notice cert
> issuance inconsistencies even under compelled assistance? This
> project has the advantage of letting Tor clients spot anomalies in
> the Tor consensus documents should any of the DirAuths be
> compromised and it can be used for CAs too:
> 
> https://github.com/dedis/cothority

I'll be happy to take a look at that.



Re: [tor-talk] Lets Encrypt compared to self-signed certs

2016-02-29 Thread Seth David Schoen
ban...@openmailbox.org writes:

> How secure is Lets Encrypt compared to a pinned self signed cert?
> Can Lets Encrypt be subverted by NSLs?

You can use pinning with Let's Encrypt certs too.  The default client
behavior changes the subject key on every renewal, but I can add a
feature to keep the old key if you want to pin at the key level.
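
For reference, key-level pinning typically means hashing the DER-encoded
SubjectPublicKeyInfo rather than the whole certificate, so renewals that
reuse the key keep the same pin.  A minimal sketch of the computation
(the input bytes are placeholders for the DER blob you would extract
with a tool like openssl):

```python
import base64
import hashlib

# HPKP-style key pin: base64(SHA-256(DER-encoded SubjectPublicKeyInfo)).
def spki_pin_sha256(der_spki: bytes) -> str:
    return base64.b64encode(hashlib.sha256(der_spki).digest()).decode()

pin = spki_pin_sha256(b"\x30\x82\x01\x22placeholder")
# SHA-256 is 32 bytes, so the base64 pin is always 44 characters.
print('pin-sha256="%s"' % pin)
```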

We don't know how large the risk of legally-compelled misissuance is,
but we have lots of lawyers who would be excited to fight very hard
against it.  I think that makes us a less attractive target than other
CAs that might not find it as objectionable or have as many lawyers
standing by to challenge it.

Remember that (without CA-level pinning) users are always at risk
from misissuance by any CA that they trust, not just the CA that
you specifically chose to use.  For example, google.com was attacked
(successfully at first) with misissued certs from DigiNotar even though
Google had no relationship with DigiNotar at all.

We also publish all of the certs that we issue in Certificate
Transparency.  You can watch the CT logs for your domain or other certs
that you care about.  If you ever see a cert in CT for your domain
that you didn't request, please make a big deal out of it.  Likewise,
if you ever see a valid cert in the wild from Let's Encrypt that doesn't
appear in the CT logs, please make a very big deal out of it.  At some
point it should become possible to get browsers to require CT inclusion
proofs for certs from Let's Encrypt, though we don't have the tools in
place for this yet.
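
One lightweight way to do that watching is to poll a CT search interface
such as crt.sh and compare what you find against the certificates you
actually requested.  A sketch, assuming crt.sh's public JSON interface
(the helper names are made up, and a real monitor would fetch the URL
over the network):

```python
import json
import urllib.parse

# Build a crt.sh query URL for a domain's certificates in JSON form.
def crtsh_url(domain: str) -> str:
    return "https://crt.sh/?" + urllib.parse.urlencode(
        {"q": domain, "output": "json"})

# Flag serial numbers in the log that we have no record of requesting.
def unexpected_serials(entries, known):
    return sorted({e["serial_number"] for e in entries} - set(known))

sample = json.loads('[{"serial_number": "03aa"}, {"serial_number": "03bb"}]')
print(crtsh_url("example.org"))
print(unexpected_serials(sample, ["03aa"]))  # ['03bb']
```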



Re: [tor-talk] Bridges and Exits together

2016-02-20 Thread Seth David Schoen
Anthony Papillion writes:

> I already run an exit node and would like to also run a bridge. Is it
> acceptable to run a bridge and an exit on the same machine and on the
> same instance of Tor? If so, are there any security issues I should be
> aware of in doing so? Any special precautions or measures I should
> take to protect my users?

Your bridge will become less useful for censorship circumvention because
its IP address (as an exit) will get published in the public directory
of Tor nodes and so automatically added to blacklists of Tor-related
addresses.  The censorship-circumvention benefit of bridges, ideally,
comes from censors not being able to tell that traffic to them is
related to Tor.



Re: [tor-talk] Tracking blocker

2016-02-19 Thread Seth David Schoen
Paul A. Crable writes:

>   A NYT article yesterday discussed tracking blockers and
>   recommended Disconnect from among four candidates for
>   Intel-architecture computers.  Disconnect would be installed
>   as an add-on to Firefox.  You have a standing recommendation
>   that we not install add-ons to the TOR browser.  Would that
>   prohibition apply to the tracking blocker Disconnect?

The recommendation not to install add-ons is because they will make
your Tor browser more different from others and so potentially more
recognizable to sites you visit -- because they could look at their
logs and say "oh, that's the Tor Browser user who was also using
Disconnect!".  If you didn't use Disconnect, they wouldn't necessarily
have a straightforward way to distinguish you from any other Tor Browser
users who also visited the site, or to speculate about whether a Tor
Browser user who visited site A was also the same Tor Browser user who
visited site B.

The Tor Browser design already provides quite strong tracker protection
compared to a run-of-the-mill desktop web browser because of all of the
ways that it tries not to keep state between sessions, tries not to let
sites find out many things about your computer or browser, and tries not
to let one site see what you've done on another site.

https://www.torproject.org/projects/torbrowser/design/

If you can point out a specific way that Disconnect protects your privacy
that Tor Browser currently doesn't, or if the Disconnect developers
can think of one, it might be constructive to bring it up with the Tor
Browser developers, because they might be willing to consider adding it
as a standard feature for all users.



Re: [tor-talk] PGP and Signed Messages,

2016-02-19 Thread Seth David Schoen
Seth David Schoen writes:

> People also don't necessarily check it in practice.  Someone made fake
> keys for all of the attendees of a particular keysigning party in
> 2010 (including me); I've gotten unreadable encrypted messages from
> over a dozen PGP users as a result, because they believed the fake key
> was real or because software auto-downloaded it for them without
> checking the signatures.

This happened once again today, shortly after I wrote this message!
The person who made the mistake was a cryptography expert who has done
research in this area.  So I fear the web of trust isn't holding up
very well under strain, at least in terms of common user practices with
popular PGP clients.



Re: [tor-talk] PGP and Signed Messages,

2016-02-19 Thread Seth David Schoen
Cain Ungothep writes:

> This is not just the "traditional" answer, it's the only proper answer.

There are other ideas out there too, like CONIKS.

https://eprint.iacr.org/2014/1004.pdf



Re: [tor-talk] PGP and Signed Messages,

2016-02-19 Thread Seth David Schoen
Nathaniel Suchy writes:

> I've noticed a lot of users of Tor use PGP. With it you can encrypt or sign
> a message. However how do we know a key is real? What would stop me from
> creating a new key pair and uploading it to the key servers? And from there
> spoofing identity?

The traditional answer, which amazingly nobody has mentioned in this
thread, is called the PGP web of trust.

https://en.wikipedia.org/wiki/Web_of_trust

In the original conception of PGP, people were supposed to sign other
people's keys, asserting that they had checked that those keys were
genuine and belonged to the people they purported to.

This is used most successfully by the Debian project for authenticating
its developers, all of whom have had to meet other developers in person
and get their keys signed.  Debian people and others still practice
keysigning parties.

https://en.wikipedia.org/wiki/Key_signing_party

This method has scaling problems, transitive-trust problems (it's possible
that some people in your extended social network don't understand the
purpose of verifying keys, or even actively want to subvert the system),
and the problem that it reveals publicly who knows or has met whom.  For
example, after a keysigning party, if the signatures are uploaded to
key servers, there is public cryptographic evidence that all of those
people were together at the same time.

So there is a lot of concern that the web of trust hasn't lived up to
the expectations people had for it at the time of PGP's creation.

People also don't necessarily check it in practice.  Someone made fake
keys for all of the attendees of a particular keysigning party in
2010 (including me); I've gotten unreadable encrypted messages from
over a dozen PGP users as a result, because they believed the fake key
was real or because software auto-downloaded it for them without
checking the signatures.

If you did try to check the signatures but didn't already have some
genuine key as a point of reference, there's also this problem:

https://evil32.com/



Re: [tor-talk] Not able to download Tor to droid]

2016-02-05 Thread Seth David Schoen
libertyinpe...@ruggedinbox.com writes:

> The url Tor was downloaded from is guardianproject.info/apps/orbot
> direct download(.apk).  I tried doing so again after your response.  The
> tablet operating system indicated it had downloaded, -- but it had not. If
> it is still on the droid hard drive, how do I find it? Is there a search
> function as in Windows?

If you're using the default Android browser, try pressing the Menu button
and then looking for "Downloads" in the menu.  This will show a list of
files that have been downloaded using the browser; selecting an APK file
there will cause it to be installed (if you've already set your settings
to allow non-Play Store app installs).



Re: [tor-talk] onion routing MITM

2016-01-26 Thread Seth David Schoen
populationsteam...@tutanota.com writes:

> I'm new to tor, trying to understand some stuff.
> 
> I understand the .onion TLD is not an officially recognized TLD, so it's not 
> resolved by normal DNS servers. The FAQ seems to say that tor itself resolves 
> these, not to an IP address, but to a hidden site somehow.
> 
> When I look at thehiddenwiki.org, I see a bunch of .onion sites, with random 
> looking names. Why is this? What if someone at thehiddenwiki.org registered a 
> new .onion site (for example http://somerandomletters.onion), which then 
> relayed traffic to duck-duck-go (http://3g2upl4pq6kufc4m.onion)? 
> Thehiddenwiki could give me the link http://somerandomletters.org, and of 
> course I would never know the difference between that and 
> http://3g2upl4pq6kufc4m.onion

The hidden service name isn't chosen directly by the hidden service
operator and you can't just make one up and start using it.  Instead,
it's derived from the hidden service's cryptographic public key.
Tor checks that the public key matches when you're connecting to the
hidden service, so someone can't simply substitute their own service
without knowing the corresponding private key.

In effect, the crypto key is used as a name (or identifier), which
provides an intrinsic cryptographic way to know whether you're talking to
someone who has the right to use that name (or is properly referred to by
it), assuming hidden service operators can keep their private keys secret.

Somewhat confusingly, people do manage to make their hidden services
start with strings of their choice, but they do this by generating
enormous numbers of different keys over and over again until they get
one that they like.  Despite that, it takes an exponentially-increasing
number of attempts for each additional character of the onion name that
you want to control, so even if Facebook can get one that starts with
"facebook" (as they did), we don't tend to think anyone* has the time
or computational resources to be able to choose the entire onion name,
for example to choose one that matches an existing one controlled by
somebody else.  For instance, even if I had generated an onion name
beginning "3g2upl4", it would take me about 32 times as much work to get
one beginning "3g2upl4p", 1024 times as much work to get one beginning
"3g2upl4pq", 32768 times as much work to get one beginning "3g2upl4pq6",
and overall 35184372088832 times as much to get one that exactly matches
DuckDuckGo's onion name.
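
The arithmetic in that last sentence can be checked directly (a sketch,
using the 32-character base32 alphabet and 16-character v2 onion names):

```python
# Each extra chosen character in an onion name multiplies the expected
# number of key-generation attempts by 32, so extending a 7-character
# prefix to a full 16-character match costs 32**(16 - 7) times as much.
BASE = 32        # size of the base32 onion-name alphabet
NAME_LEN = 16    # length of a v2 onion name

extra_work = BASE ** (NAME_LEN - 7)
print(extra_work)  # 35184372088832, matching the figure in the text
```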

> Am I supposed to get the duckduckgo URL from a trusted friend of mine, and 
> then 
> always keep it?

Yes, or from DuckDuckGo's regular site.

https://duck.co/help/privacy/no-tracking


* The Bitcoin network is doing quite a bit more computation, in total,
  than this per year, so it's actually conceivable that someone with a
  very large amount of money to spend on custom hardware could do this.
  So the next generation of Tor hidden services will use a longer
  onion name.



Re: [tor-talk] onion routing MITM

2016-01-26 Thread Seth David Schoen
populationsteam...@tutanota.com writes:

> The question is: From a user perspective, http://3g2upl4pq6kufc4m.onion just 
> looks like random characters. (And in fact, if it's a hash of a public key, 
> which was originally randomly generated, then indeed these *are* random 
> characters). You obviously don't want to memorize a domain name such as this, 
> and as a human, you're very bad at recognizing the difference between 
> http://3g2upl4pq6kufc4m.onion and http://xmh57jrzrnw6insl.onion

In the Zooko's Triangle sense, Tor hidden service names are secure and
decentralized, but not human-meaningful (or human-memorable).

https://en.wikipedia.org/wiki/Zooko's_triangle

That is to say that Tor hasn't tried to solve the problem you mention
at all.  The answer seems to be that you're supposed to get the names
somewhere else and store them in something other than your human memory.
This is in common with a few other designs that use representations
of crypto keys directly (for example, PGP and Bitcoin) and where
someone could try to trick you into using a key that isn't really the
right one.  In the PGP example, someone has uploaded a fake key with my
name and e-mail address to the keyservers (several years ago), which has
already fooled a number of people because they couldn't or didn't readily
distinguish my real key from the fake key, both of which are just numbers
that someone on the Internet has claimed are relevant to contacting me.

If you have ideas for making this more convenient, I'm sure they would
be welcome.  Aaron Swartz proposed in 2011 that blockchains and related
systems could solve it by letting people publicly announce claims to
(human-memorable) names in an append-only log.

http://www.aaronsw.com/weblog/squarezooko

There are some implementations of related ideas, like okTurtles, but
none is extremely widely used yet.
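
The core of such an append-only name log can be sketched in a few lines
(a deliberately naive model: the first claim of a name wins, and each
entry commits to the previous one by hash so history can't be quietly
rewritten; real systems add consensus, signatures, and expiry):

```python
import hashlib

# Toy append-only log of human-memorable name claims.
class NameLog:
    def __init__(self):
        self.claims = {}
        self.head = "0" * 64  # hash chained over all entries so far

    def claim(self, name: str, pubkey: str) -> bool:
        if name in self.claims:
            return False  # first claim wins; later claims are rejected
        self.head = hashlib.sha256(
            (self.head + name + pubkey).encode()).hexdigest()
        self.claims[name] = pubkey
        return True

log = NameLog()
print(log.claim("duckduckgo", "key-A"))  # True
print(log.claim("duckduckgo", "key-B"))  # False: name already taken
```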

> What prevents a person from registering a new .onion site, such as 
> http://laobeqkdrj7bz9pq.onion and then relaying all its traffic to  
> http://3g2upl4pq6kufc4m.onion, and trying to get people to believe that 
> *they* are actually the duckduckgo .onion site?

Indeed, Juha Nurmi described earlier today that people are doing exactly
that right now, probably with some success.

https://lists.torproject.org/pipermail/tor-talk/2016-January/040038.html



Re: [tor-talk] Hello I have a few question about tor network

2016-01-22 Thread Seth David Schoen
Lucas Teixeira writes:

> Are there references for "real life" usage of traffic confirmation?

I've mentioned the Jeremy Hammond and Eldo Kim cases, which can be seen
as "good enough" coarse-grained correlation.  I think there are others
if we look for them.



Re: [tor-talk] Hello I have a few question about tor network

2016-01-01 Thread Seth David Schoen
Oskar Wendel writes:

> Seth David Schoen <sch...@eff.org>:
> 
> > As I said in my previous message, I don't think this is the case because
> > the correlation just requires seeing the two endpoints of the connection,
> > even without knowing the complete path.
> 
> Is it possible to be sure that one of these connecting clients is in fact 
> a client and not just intermediate relay in the circuit?

As a guard node (or someone observing a guard node) trying to locate the
operator of a hidden service, you can use the IP address of the inbound
connection and the Tor directory to see if it's another Tor node or not.
I guess the hidden service operator could use a bridge to create more
ambiguity about what's happening; I don't know for sure if a guard node
has a way to distinguish an inbound connection from a bridge from an
inbound connection directly from a client.
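
That directory check can be sketched as follows (the addresses are
documentation placeholders, not real relays, and a real check would
consult the current Tor consensus):

```python
# A guard (or its observer) can look up an inbound connection's IP
# address in the public list of relay addresses from the consensus.
consensus_relay_ips = {"198.51.100.7", "203.0.113.9"}

def inbound_kind(ip: str) -> str:
    if ip in consensus_relay_ips:
        return "known relay (middle hop of some circuit)"
    # Bridges are deliberately absent from the public directory, which is
    # why they create the ambiguity described above.
    return "client, or possibly a bridge"

print(inbound_kind("198.51.100.7"))
print(inbound_kind("192.0.2.44"))
```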



Re: [tor-talk] Hello I have a few question about tor network

2015-12-31 Thread Seth David Schoen
Aeris writes:

> > Does it apply also to traffic going from/to hidden services? How safe are
> > users of hidden services when compared to users that browse clearnet with
> > Tor?
> 
> Correlation is possible but much more difficult, because 3 nodes for client
> to rendez-vous points, then 3 others for rendez-vous to HS.

As I said in my previous message, I don't think this is the case because
the correlation just requires seeing the two endpoints of the connection,
even without knowing the complete path.  This is even possible with
a hidden service because the server that provides the hidden service
also uses an entry guard of its own, which is the "endpoint" for traffic
correlation purposes when a user is contacting the hidden service, despite
the much longer (and so harder to observe) path within the Tor network.

The lack of security improvement from longer path lengths is described in

https://www.torproject.org/docs/faq.html.en#ChoosePathLength

> Strength of HS is also to not have clearnet output, even if the « exit » node 
> of one of the circuits is compromised, an attacker can’t access clear data. 
> Not the case in the standard case, where a compromised exit node has access to 
> all the user data if HTTPS is not used.

That's definitely an improvement, although there's an issue in the long
run that the crypto in HTTPS is getting better faster than the crypto
in Tor's hidden services implementation. :-)



Re: [tor-talk] Hello I have a few question about tor network

2015-12-31 Thread Seth David Schoen
Oskar Wendel writes:

> Does it apply also to traffic going from/to hidden services? How safe are 
> users of hidden services when compared to users that browse clearnet with 
> Tor?

The hidden service users can be identified as users of the individual
services using the same sybil approach: if a user uses a particular
guard node and the hidden service uses a guard node controlled (or
observed) by the same entity, that entity can correlate the traffic
between the two.  I don't know how easy it is to infer right at that
moment that the communication is between a user and a hidden service
rather than between two users intermediated by something else.  However,
the attacker can potentially realize that it's a guard node for some
hidden service because a particular user connects to the guard node
all the time, has a high traffic volume, and for some hidden services,
uploads more than it downloads on average (which is the reverse of the
usual pattern for a Tor Browser user).  (That inference might be even
easier if the hidden service's guard node just notices whether that user
tends to upload a little data followed by downloading a lot of data,
or download a little data followed by uploading a lot of data, since
web browsers usually do the former and web servers usually do the latter.)
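
That direction heuristic might look something like this (thresholds and
labels are illustrative, not taken from any real tool):

```python
# Web clients typically send a little and receive a lot; web servers
# receive small requests and send large responses, i.e. the reverse.
def guess_endpoint_role(bytes_sent: int, bytes_received: int) -> str:
    if bytes_received > bytes_sent:
        return "client-like (sends small requests, receives large responses)"
    return "server-like (receives small requests, sends large responses)"

print(guess_endpoint_role(bytes_sent=40_000, bytes_received=2_000_000))
print(guess_endpoint_role(bytes_sent=2_000_000, bytes_received=40_000))
```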

The guard node has a conceptually harder task in figuring out _which_
hidden service it's a guard node for.  There has been a lot of research
that touches on this issue and it's clearly not as easy for hidden
services to conceal their identities from their guard nodes as it
should be, especially if the guard nodes actively experiment on the
hidden service.  One example that shows why this is a difficult problem
is that if you control a guard node and you know about the existence of a
particular hidden service, you can connect to the hidden service yourself
and see if that results in any traffic coming out of your guard node.
You can also deliberately shut down clearnet traffic to and from your
guard node for a few seconds at a time at randomly-chosen moments and
see if that results in outages of availability for the hidden services
at the same moments.

I think some of these ideas are developed in published papers and I'm
sorry for not thinking of which papers at the moment.  You can see that
this can make the situation of the hidden service somewhat precarious.

See also

https://blog.torproject.org/blog/hidden-services-need-some-love

There might be some more hope in the future from high-latency services
(based on examples like Pond), or, based on what some crypto folks have
been telling me, from software obfuscation (!!).



Re: [tor-talk] Hello I have a few question about tor network

2015-12-31 Thread Seth David Schoen
Alexandre Guillioud writes:

> " That's definitely an improvement, although there's an issue in the long
> run that the crypto in HTTPS is getting better faster than the crypto
> in Tor's hidden services implementation. :-) "
> 
> I don't understand why you are saying that this is an 'issue'.
> If one of the crypto tech is getting better, the tor stack will be improved
> in its whole, isn't it ?

It's also a question of practical deployment: it should be improved
eventually with new Tor protocol versions, but I don't believe that it
has been yet (although I'd love for the Tor developers to correct me on
this point).

> Moreover, i've read that some 'ssl authoritie' is now allowing registration
> of .onion domains.

Yes, Digicert is offering them.

https://blog.digicert.com/ordering-a-onion-certificate-from-digicert/

But as you can see from their page, they only offer EV certificates,
which involve verifying the legal identity of an organization.  So the
certificates aren't available for onion sites that are operated by
individuals or that are operated by anonymous people or organizations.
Right now, probably most onion sites wouldn't be able to get a certificate
for their sites because of these restrictions.  (I'm grateful to Digicert
for their work on this -- the restrictions aren't their fault!)

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Hello I have a few question about tor network

2015-12-29 Thread Seth David Schoen
권현준 writes:

> I subscribe tor-talk
>  
> Hello I'm Korean student studying security
> First of all sorry for my bad english. 
> I have a few question about tor network
>  
> 1. Tor network is 100% security network? that can not be hacked by other 
> cracker?
>  
> 2. If not, How can cracker attack tor network that tor can't prevent?

Hi!

I would suggest looking at Tom Ritter's overview presentation about Tor.
It is very detailed.  Hopefully the technical level will be appropriate
for you and the English content will be clear.

https://ritter.vg/p/tor-v1.6.pdf

He gives a number of discussions of limitations of Tor and possible
attacks.  There are also attacks that try to deanonymize users (finding
the true IP address of a user responsible for a circuit) or hidden
services (finding the true IP address of a server responsible for a
hidden service) under various conditions and circumstances.  This is an
ongoing area of research for academic studies, and also probably for
governments that want to identify Tor users.

Particular research on Tor has been written about on the Tor blog at

https://blog.torproject.org/category/tags/research

and also collected as part of the anonymity bibliography at

http://www.freehaven.net/anonbib/

Of course only some of the later papers there relate to Tor, because Tor
didn't even exist at the time that the anonymity field first began! :-)

There are a lot of attacks that are effective at least some of the time.
If you look at the original Tor design paper, they assume that someone
who is watching the place where a user enters the network (the first
node in the chain, today called entry guard) and the place where the
user's communications exit the network (the exit node) will be able to
break the user's anonymity by noticing that the amount and timing of data
going in on one side matches the amount and timing of data coming out on
the other side.  This is pretty serious and has been used to deanonymize
people in real life.  Some of the research papers propose ways of trying
to deanonymize users or hidden services under more restrictive
conditions, where the attacker controls or monitors less of the network,
or controls or monitors something other than entry and exit traffic.

One issue about this is understanding what counts as a successful
attack.  I'm still concerned that Tor users may not understand the issue
presented in the original design about how someone watching both sides
can recognize them!
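To make that concrete, here is a toy sketch of the confirmation step (invented numbers, nothing like real Tor traffic handling): an observer with per-second byte counts from both ends of the network just correlates the two time series.

```python
# Toy illustration of a traffic-confirmation attack (not Tor code):
# an observer who records per-second byte counts at both an entry
# guard and an exit can match flows by correlating the time series.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

entry_flow = [0, 512, 4096, 128, 0, 2048, 64, 0]   # bytes/sec seen at entry
exit_flow  = [0, 500, 4100, 130, 0, 2000, 60, 0]   # same flow seen at exit
other_flow = [900, 0, 32, 4096, 8, 0, 512, 2048]   # an unrelated user's flow

print(pearson(entry_flow, exit_flow))   # close to 1.0: same flow
print(pearson(entry_flow, other_flow))  # far from 1.0: different flow
```

Padding and timing jitter can make this harder, but the basic point stands: whoever sees both ends doesn't need to break any cryptography.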

Another kind of attack that hasn't been discussed very much is the
idea of hacking the individual servers that provide the Tor network,
either by exploiting software vulnerabilities in the Tor server itself
or by exploiting vulnerabilities in other software that these servers
run like Linux or OpenSSH.  This sort of attack could be quite serious
if it affected many different Tor nodes at the same time, because the
nodes could be reprogrammed by the attacker to start logging data and to
cooperate to reveal users' activities.  There's no specific publicly-known
vulnerability that can be used to do this right now; an attacker would
need to find or buy knowledge of a new one (although there might be some
portion of Tor nodes that are slow to apply server software updates,
which might still be vulnerable to older software bugs or might have
stayed vulnerable for a longer period of time).

It's important to understand the difference between hidden services and
exit traffic when reading the academic research, because a lot of
research focuses on deanonymizing hidden services, which poses different
challenges from deanonymizing regular users.  Attacks against hidden
services can be quite serious, but they only represent a small fraction
of the overall use of the Tor system.

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Ordering a .onion EV certificate from Digitcert

2015-12-15 Thread Seth David Schoen
Fabio Pietrosanti (naif) - lists writes:

> Hello,
> 
> we asked on Twitter to Digicert to provide a quick guide on how order an
> x509v3 certificate for TLS for a .onion, they've just published this
> small guide:
> https://blog.digicert.com/ordering-a-onion-certificate-from-digicert/
> 
> Hopefully other CA will follow and at a certain point letsencrypt too.

Let's Encrypt doesn't issue EV, so the CA/B Forum needs to agree that
DV certs can be issued for .onion names too (some people have suggested
that they would be called something other than "DV", but be analogous to
DV, based on proof of possession of a cryptographic key from which the
name is derived).
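For context on "a cryptographic key from which the name is derived": a current-generation (v2) onion name is the base32 encoding of the first 80 bits of the SHA-1 digest of the service's DER-encoded RSA public key, which is why possession of the key can serve as proof of control of the name. A sketch, with placeholder bytes standing in for a real DER-encoded key:

```python
import base64
import hashlib

def onion_name_v2(der_pubkey: bytes) -> str:
    """Derive a v2-style .onion name from a DER-encoded public key."""
    digest = hashlib.sha1(der_pubkey).digest()
    # First 10 bytes (80 bits) of the digest, base32-encoded, lowercased
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

# Placeholder input -- a real derivation uses the actual DER key bytes
print(onion_name_v2(b"not-a-real-der-encoded-key"))
```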

-- 
Seth Schoen  
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] I am getting European nodes only?

2015-08-25 Thread Seth David Schoen
forc...@safe-mail.net writes:

> Hello!
>
> Using the last release of Tor Browser, I am a bit surprised: Circuits are
> made ONLY with European nodes! I changed identity a few times, asked New
> Tor circuit for this site, every time there are only European nodes!
>
> I cannot believe that Northern and Southern America, Asia, Russia, haven't
> any Tor node???
>
> Any suggestion to stop this and use other nodes than only European ones?
>
> (I didn't modify anything in the config)

The European nodes include some of the fastest nodes in the world, and
the probability of choosing a node in a path is related to how fast the
node is (you're more likely to use nodes that have more capacity than
nodes that have less capacity).

Depending on how you update, you might be using a new set of guard
nodes.  The guard nodes are chosen randomly when you first run Tor (or
a fresh copy that's not using the old configuration).  The guard nodes
affect which exit nodes your Tor client will choose because the guard
node can't also be used as an exit node.  So one possibility is that
if you have several fast European nodes as guard nodes, you'll tend to
choose other nodes as exits (relatively more likely outside of Europe),
while if you have several non-European nodes as guard nodes, you'll
tend to choose other nodes as exits (relatively more likely within
Europe, especially since that's where the fastest exits are).
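A toy sketch of the bandwidth weighting (hypothetical relay names and speeds; Tor's real path-selection rules are considerably more involved):

```python
import random

# Hypothetical relays with consensus bandwidth figures (Mbit/s).
relays = {"de-exit": 500, "fr-exit": 400, "us-exit": 80, "jp-exit": 20}

def pick_exit(relays, rng=random):
    """Choose an exit with probability proportional to its bandwidth."""
    names = list(relays)
    return rng.choices(names, weights=[relays[n] for n in names])[0]

# Sample many selections: the fast (here, European) relays dominate.
counts = {n: 0 for n in relays}
for _ in range(10_000):
    counts[pick_exit(relays)] += 1
print(counts)
```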

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Letsencrypt and Tor Hidden Services

2015-08-19 Thread Seth David Schoen
Fabio Pietrosanti (naif) - lists writes:

> Hello,
>
> does anyone had looked into the upcoming Letsencrypt if it would also
> works fine with Tor Hidden Services and/or if there's some
> complexity/issues to be managed?
>
> As it would/could be interesting if Tor itself would support directly
> letsencrypt to load TLS certificate on TorHS.

Hi, I'm working on the Let's Encrypt project.  A difficulty to contend
with is that the certificate industry doesn't want certs to be issued
for domain names in the long term unless the names are official in
some way -- to ensure that they have an unambiguous meaning worldwide.
The theoretical risk is that someone might use a name like .onion in
another way, for example by trying to register it as a DNS TLD through
ICANN.  In that case, users might be confused because they meant to use
a name in one context but it had a different meaning that they didn't
know about in a different context.

Right now, the industry allows .onion certs temporarily, but only EV
certs, not DV certs (the kind that Let's Encrypt is going to issue),
and the approval to issue them under the current compromise is going
to expire.

It's seemed like the efforts at IETF to reserve specific peer-to-peer
names would be an important step in making it possible for CAs to issue
certs for these names permanently.  These efforts appeared to get somewhat
bogged down at the last IETF meeting.

https://gnunet.org/ietf93dnsop

(I'm hoping to write something on the EFF site about this issue, which
may have kind of far-reaching consequences.)

Anyway, I would encourage anyone who wants to work on this issue to get
in touch with Christian Grothoff, the lead author of the P2P Names draft,
and ask what the status is and how to help out.

Theoretically the Tor Browser could come up with a different optional
mechanism for ensuring the integrity of TLS connections to hidden services
(based on the idea that virtually everyone who tries to use the hidden
services is using the Tor Browser code).  I don't know whether the Tor
Browser developers currently think this is a worthwhile path.  I can
think of arguments against it -- in particular, the next generation hidden
services design will provide much better cryptographic security than the
current HS mechanism does, so maybe it should just be a higher priority
to get that rolled out, rather than trying to make up new mechanisms to
help people use TLS on hidden services.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Letsencrypt and Tor Hidden Services

2015-08-19 Thread Seth David Schoen
elrippo writes:

> Hy,
> i don't think letsencrypt will work on a HS because letsencrypt checks [1] if
> the domain you type in, is registered.
> So for example on a clearnet IP which has a registered domain at mydomain.com
> called myserver.tld, letsencrypt makes a DNS check for this clearnet IP and
> gets the awnser, that this clearnet IP has a registeres domain called
> myserver.tld on mydomain.com.
>
> How should letsencrypt do this on a HS?

If the CA/Browser Forum agreed that it was proper to do this, we could
create a special case for requests that include a .onion name to use
a different (non-DNS) resolution mechanism, recognizing that DNS is
not the only name resolution protocol on the Internet, as Christian
Grothoff put it.
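A minimal sketch of what that special case might look like in a validator; every function here is a hypothetical stand-in, not a real CA or Let's Encrypt API:

```python
# Hypothetical sketch: a CA's validation pipeline dispatching .onion
# names to a non-DNS resolution mechanism.  Both lookup functions are
# stand-in stubs for illustration only.

def dns_lookup(name: str) -> str:
    return f"A-record for {name} via DNS"

def tor_rendezvous_lookup(name: str) -> str:
    return f"descriptor for {name} via Tor rendezvous"

def resolve(name: str) -> str:
    # Names under .onion never appear in the public DNS, so they must
    # be routed to the alternate mechanism.
    if name.lower().endswith(".onion"):
        return tor_rendezvous_lookup(name)
    return dns_lookup(name)

print(resolve("example.com"))
print(resolve("facebookcorewwwi.onion"))
```

The worry described below is exactly about this dispatch table: nothing official tells every CA which suffixes need which mechanism.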

I can't promise that Let's Encrypt would do this, but I think we would
be interested in the possibility.

In a way, the special-casing is what makes some folks in the CA/Browser
Forum nervous right now: if there's no official notion of the meaning
of some names, how can CAs know which names should use which resolution
mechanisms?  (For example, maybe some CAs have heard that they should
treat .onion specially, but others haven't.)  If they're unsure which
mechanisms to use, how can they know that the interpretation they give
to the names will be the same as end-users' interpretation?

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Letsencrypt and Tor Hidden Services

2015-08-19 Thread Seth David Schoen
Alec Muffett writes:

> Pardon me replying to two at once...

Thanks for all the helpful clarifications, Alec.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Letsencrypt and Tor Hidden Services

2015-08-19 Thread Seth David Schoen
Flipchan writes:

> Im wondering , have anyone got letsencrypt to work with a .onion site? Or is
> it jus clearnet

For the reasons described elsewhere in this thread, it's definitely
just clearnet for the foreseeable future.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Why is my message reject at tor-announce-ow...@lists.torproject.org ?

2015-08-12 Thread Seth David Schoen
Qaz writes:

> Hi there,
>
> Yeah the title pretty much says it. How do I go about this?

tor-announce isn't a discussion list and the public isn't allowed to
send messages to it.  The place where you can have public discussions
is tor-talk -- this list right here.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] General question regarding tor, ssl and .onion.

2015-08-08 Thread Seth David Schoen
MaQ writes:

> Also, while it was said that .onion encryption was of lower standard,
> wouldn't a high degree of privacy and randomness still be assured,
> except for maybe alphabet agencies and more nefarious types out there
> specifically targeting a subject or .onion addresses in general, and
> some serious work and resources would have to go into pinpointing and
> breaking said encryption?

I think it's reasonable to guess that cryptographic attacks would
be extremely expensive, so most prospective attackers today wouldn't
try them.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] General question regarding tor, ssl and .onion.

2015-08-08 Thread Seth David Schoen
Jeremy Rand writes:

> It's theoretically possible to use naming systems like Namecoin to
> specify TLS fingerprints for connections to Tor hidden services, which
> would eliminate the need for a CA.  I'm hoping to have a proof of
> concept of such functionality soon.

Is there a way to prevent an attacker from simply claiming the same
identifier in Namecoin before the actual hidden service operator does?

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] General question regarding tor, ssl and .onion.

2015-08-07 Thread Seth David Schoen
MaQ writes:

> Hello,
>
> I'm curious, I'm developing an app whereas sharing/collaboration
> can be done by localhost through tor and .onion address between pairs or
> multiples. When I use standard http there seems to not be any problems
> connecting different computers, different IPs, etc. and interacting, but
> when attempting to do it under https there isn't any connection. Https
> is definitely functioning with original hosts.
>
> My question is, since things are already going through tor with
> .onion connections and things encrypted anyway, is not using ssl really
> presenting any sort of serious compromise on anonymity? Wouldn't it be
> sort of like encrypting the encryption?

There is an ongoing discussion about how seriously one needs HTTPS with
a .onion address.  There is already end-to-end encryption built into the
Tor hidden service design, so communications with hidden services (even
using an unencrypted application-layer protocol like HTTP) are already
encrypted.

A problem is that the encryption for the current generation of hidden
services is below-par, technically, in comparison to modern HTTPS in
browsers -- it uses less modern cryptographic primitives and shorter
keylengths than would be recommended for HTTPS today.  This will change
eventually with future updates to the hidden service protocol, but right
now there would be incremental cryptographic benefit from connecting to
a hidden service via HTTPS.  But the encryption from HTTPS in this case
serves the same purpose as the hidden service encryption, so you're indeed
encrypting the encryption when you use it.

Unfortunately, it's hard to do today because certificate authorities
are reluctant to issue certs for .onion names; the CA/Browser Forum
has allowed them to do so temporarily, but only EV certificates can
be issued, which cost money, take time, and sacrifice anonymity of the
hidden service operator.

The best-known example of a hidden service that managed to navigate the
process successfully is

https://facebookcorewwwi.onion/

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] tor not running

2015-07-28 Thread Seth David Schoen
Bill Cunningham writes:

> #3 and on I did not know. Never usesd Keys. But I have the gp44win know. I
> will let you know the results. After having imported the keychain If that's
> the correct wording. How does this download site work for others and not me?
> I am showing my ignorance I know, but I don't know why.

Most users don't use GPG to verify their downloads -- probably much
fewer than 1%.  If the download succeeds without interference, it isn't
technically necessary to verify it before using it.  It's a security
precaution.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] HORNET onion routing design

2015-07-24 Thread Seth David Schoen
str4d writes:

> * No replay detection - packet replay is ignored within the lifetime
> of a session. They suggest that adversaries would be deterred by the
> risk of being detected by volunteers/organizations/ASs, but the
> detection process is going to add additional processing time and
> therefore compromise throughput (c/f I2P uses a bloom filter to detect
> packet replays, and this is the primary limiting factor on
> participating throughput).

If the remote peer has to be actively involved in the onion routing
process, couldn't it detect replays rather than having the routers do it?
Or is the replay problem a problem of wasting network resources rather
than fooling the peer into thinking a communication was repeated?

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


[tor-talk] HORNET onion routing design

2015-07-22 Thread Seth David Schoen
Has anybody looked at the new HORNET system?

http://arxiv.org/abs/1507.05724v1

It's a new onion routing design that seems to call for participation
by clients, servers, and network-layer routers; in exchange it claims
extremely good performance and scalability results.

I think it also calls for the use of network-layer features that aren't
present in today's Internet, so it might be hard to get a practical
deployment up and running at the moment.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] pdf with tor

2015-07-03 Thread Seth David Schoen
mtsio writes:

> If you to Preferences-Applications-Portable Document Format there is
> the option 'Preview in Tor Browser' that opens the PDF without opening
> an external application. What's the problem with that?

There are two kinds of risks that lead to the suggestion not to view
documents like PDFs inside your Tor Browser (or even not on the same
machine) -- exploits and IP address leaks.

The first risk is that sometimes there are software bugs in application
and viewer software that would allow someone who knew about the bugs
to take over your computer by constructing an invalid input file that
exploits the bug and then getting you to render the file.  So in that
case, someone could, for example, make an invalid PDF that exploits a
bug in the PDF renderer in your browser, and get you to view it somehow,
and then take over the browser.

The other is that many formats can cause software to make Internet
requests (for example, it's possible to embed image links in a Word
document so that a Word viewer will go and download those images).
Here, the concern is that if the software makes some kind of network
request when displaying the document, whoever is on the other end may
see that request coming directly over the Internet -- not via Tor --
and connect the request with your Tor activity.

So, some cautious Tor users advise copying all downloaded files onto
a different computer that's not connected to the Internet, or at least
inside of a virtual machine with no direct Internet access, and viewing
them there.

I don't know of specific cases in which people have deliberately used
these approaches to identify anonymous Tor users, but it's something
that's been discussed, and there _is_ a high rate of malware and tracking
links hidden inside e-mail attachments.  I liked the anecdote (which
I've seen in a few places) that Tibetan Buddhists who've received a lot
of malware are now practicing a new non-attachment principle.

https://www.yahoo.com/tech/hit-by-cyberattacks-tibetan-monks-learn-to-be-wary-of-102361885314.html

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Is this still valid?

2015-06-25 Thread Seth David Schoen
Seth David Schoen writes:

> If you read the original Tor design paper from 2004, censorship
> circumvention was actually not an intended application at that time:
>
> https://svn.torproject.org/svn/projects/design-paper/tor-design.pdf
>
> ("Tor does not try to conceal who is connected to the network.")

The connection to censorship circumvention is that, on a censored
network, people are normally not allowed to connect to censorship
circumvention services (that the network operator knows about).  So if
you allow the network operator to easily know who is connecting to the
service -- as the 2004 version of Tor always did -- they can block it
immediately (as several governments did when they noticed Tor was
becoming popular in their countries).

Now that Tor also has censorship circumvention as a goal, there are
several methods it can use to try to disguise the fact that a particular
person is connected to the Tor network.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Is this still valid?

2015-06-25 Thread Seth David Schoen
U.R.Being.Watched writes:

> http://www.deseret-tech.com/journal/psa-tor-exposes-all-traffic-by-design-do-not-use-it-for-normal-web-browsing/

There are some mistakes in the article -- for example the notion that
Tor was built for a specific purpose, which was the circumvention of
restrictive firewalls like the Great Firewall of China.

If you read the original Tor design paper from 2004, censorship
circumvention was actually not an intended application at that time:

https://svn.torproject.org/svn/projects/design-paper/tor-design.pdf

("Tor does not try to conceal who is connected to the network.")

That has subsequently changed, the project adopted anticensorship uses
as an additional goal, and nowadays Tor does sometimes try to conceal
who is connected to the network, when they ask it to.  (Sometimes this
succeeds against a particular network operator, and sometimes not.)

But the original design goal was privacy in a particular sense, and
not censorship circumvention.

My colleagues and I made an interactive diagram a few years ago to try
to explain the same concern that this article presents.

https://www.eff.org/pages/tor-and-https

One part of it is that if you use Tor without additional crypto protection
to your destination (like HTTPS), a different set of people can eavesdrop
on you than if you didn't use Tor at all.  That's definitely still
true and is always a basic part of Tor's design.  You might think those
people are better or worse as eavesdroppers than the nearby potential
eavesdroppers.  The faraway eavesdroppers might be more organized and
malicious about it, but they also might start out not knowing who you are.
Whereas the nearby eavesdroppers might physically see you, or have issued
you an ID card, or have your credit card.

As we thought when we made that diagram, probably the best solution for
this is more and better HTTPS.  At some point (which may already be in the
past), it might even be a good idea for Tor Browser to refuse to connect
to non-HTTPS sites by default, although that might be a difficult policy
to explain to users who don't understand exactly what HTTPS is and how
it protects them, and just see that Tor Browser stops being able to use
some sites that Internet Explorer can work with.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] a question about ip addresses

2015-05-11 Thread Seth David Schoen
Heigrade writes:

> Hello,
>
> I am new to TOR and networking in general and had a question about ip
> addresses that TOR connects to.
>
> My question is this:
>
> After analyzing tcpdump data of TOR traffic, I've noticed that TOR
> always connects to the same ip address, even after restarts, whereas I
> had expected it to connect to different addresses across restarts. Is
> this address my ISP?

It's probably the entry guard that Tor chose for you.

https://www.torproject.org/docs/faq#EntryGuards

This is meant to reduce the chance that you'll choose an entry node and
an exit node secretly controlled or monitored by the same entity.

Some more technical details are in

https://blog.torproject.org/blog/improving-tors-anonymity-changing-guard-parameters

and probably other places.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] What is being detected to alert upon?

2015-04-30 Thread Seth David Schoen
Frederick Zierold writes:

> Hi,
>
> I am very curious how a vendor is detecting Tor Project traffic.
>
> My questions is what are they seeing to alert upon?  I have asked them,
> but I was told that is in the special sauce.
>
> Is the connection from the users computer to the bridge encrypted?
>
> Thank you for your insight.

Are they detecting non-public bridge traffic, or only normal entry
guards?

Detection and obfuscation is kind of a big topic that's been around for
some years, so there are a lot of possibilities.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] SIGAINT email service targeted by 70 bad exit nodes

2015-04-22 Thread Seth David Schoen
Roger Dingledine writes:

> > I know we could SSL sigaint.org, but if it is a state-actor they could just
> > use one of their CAs and mill a key.
>
> This is not great logic. You're running a website without SSL, even though
> you know people are attacking you? Shouldn't your users be hassling you
> to give them better options? :)
>
> As you say, SSL is not perfect, but it does raise the bar a lot. That
> seems like the obvious next step for making your website safer for
> your users.

What's more, you can conceivably detect the bad CAs through your own
scans or tests (if your scans can find widespread BadExits, they could
equally find widespread bad CAs whose certs are fraudulently presented
by those same BadExits).  You could also use HPKP pinning with the
report-uri mechanism to have clients tell you when they encounter fake
keys, although it's not clear that you can get a lot of benefit from
that in the default Tor Browser.
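For illustration, an HPKP header with reporting looks roughly like this (the pin values and report URL here are invented; RFC 7469 defines the real syntax):

```
Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
    pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
    max-age=2592000; report-uri="https://example.org/hpkp-report"
```

Clients that later see a certificate chain not matching a pinned key would POST a validation-failure report to the report-uri, which is the detection channel described above.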

People are _very_ interested in knowing about compromised CAs.  So I
encourage people not to just assume that they're numerous and not bother
to use tools to detect them. :-)

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] New Tor project idea for internet comments

2015-03-04 Thread Seth David Schoen
Lee Malek writes:

> Hi, I am new here.
>
> I have an idea for a tor sub-project that would serve our purpose (fighting
> censorship) perfectly.
>
> This would be a different version of tor - a sort of sub-tor... and a browser
> plugin.
>
> Everyone that installs this version of tor would be forced to run a relay -
> but only for comments - no images, etc.
>
> The browser plugin would connect to the tor app and scan the webpage the
> person is on. The plugin would display on a drop down comments people have
> made using the tor comments system. They can of course make comments of their
> own.
>
> I think this is a must for our purpose. So many news websites block comments
> they don't like these days.

This is a major change from the existing approach of Tor and the Tor
developers.

First, the Tor project has only focused on preventing censorship
by networks and network operators, not by web sites.  The
censorship-resistance approach of Tor has been that your ISP shouldn't
be able to control whom you can communicate with, as opposed to that
web sites shouldn't be able to control who can post there or what they
can post.

Although the Tor Project has been very interested in ways to encourage
sites not to block anonymous users, there's never been an effort to
force the sites to accept anonymous users, or to conceal the fact that
someone is using Tor on the exit side.  In fact, the Tor Project has
specifically rejected the idea of doing that:

https://www.torproject.org/docs/faq.html.en#HideExits

("If people want to block us [on the exit side], we believe that they
should be allowed to do so.")

Second, Tor has never tried to force people to route other people's
traffic or to hide the fact that this is happening.  Instead, there
are a lot of cautions given to people who are considering operating
exit relays.  In your proposal, all of the users would be acting as
exits and routing (some) traffic to the public Internet.  That would
tend to put unsuspecting users at risk because they'd start to be the
subject of abuse complaints, including on their home Internet connections.
(In some designs, people could also deliberately target specific people
they don't like by posting threats through those people's connections.)
That would also probably make running Tor a lot less appealing to some
users because they wouldn't be given the choice about whether to provide
exits for other people's traffic.

Third, the distinction between comments and other kinds of traffic is
one that requires a huge amount of programming to enforce, and that can
probably only be enforced if users aren't using HTTPS to connect to the
sites.  The Tor Project and larger Tor community have been trying very
hard to get HTTPS deployed everywhere specifically so that Tor exit
nodes _won't_ be able to spy on or examine what Tor users are doing.  If
progress continues to be made on that front, the Tor exits will be less
and less in a position to make the distinction that you suggest between
comments and other stuff.

(It might be possible to extend the Tor protocol to have comment posting
be a special kind of exit, where the user explicitly entrusts the text of
the comment to the exit node, which then makes its own HTTPS connection to
the site and posts the comment.  But that would be a lot of engineering
work and would entail a new arms race with the web site operators, who
would be able to update the HTML code of their sites frequently to stop
Tor exit nodes from being able to recognize where and how to post the
comments.  So that's a lot of effort for a kind of blocking resistance
that Tor developers don't necessarily support philosophically and that
would be challenging to sustain over time.)

Fourth, there are some other technical problems with having everyone be
a relay.

https://www.torproject.org/docs/faq.html.en#EverybodyARelay



Re: [tor-talk] Tor Browser Bundle with Chromium

2015-02-19 Thread Seth David Schoen
Luis writes:

> What are the reasons that make building a Tor Browser using Chromium
> not such a good idea? I recall reading somewhere that while making a Tor
> Browser with a Chromium base would have its benefits due to Chromium's
> superior security model (i.e. sandboxing), there are serious privacy
> issues that would have to be solved to make that possible.
> My question is: what are those issues? What is preventing someone from
> digging out all the Google integration and possible privacy-endangering
> features and making a Tor Browser Bundle out of it?

https://trac.torproject.org/projects/tor/wiki/doc/ImportantGoogleChromeBugs

I think that list is kept relatively up-to-date.

More generally, there are a lot of customizations in Tor Browser to turn
off or alter Firefox features that might identify a user (by making one
Tor user's browser look recognizably different from others) or might
bypass the proxy (causing the browser to send non-Torified traffic over
the Internet).

The Tor Project hasn't received a lot of help from the Chromium developers
on changes that would be important for making these customizations --
but with or without that help, they would be a lot of work in their own
right, just as they were a lot of work on the Firefox side.

You can read about some of the customizations in the Tor Browser design
document at

https://www.torproject.org/projects/torbrowser/design/



Re: [tor-talk] Confidant Mail

2015-02-03 Thread Seth David Schoen
Mike Ingle writes:

> As far as HTTPS:
> The NSA has the ability to get into Amazon EC2 and mess with files
> too, no doubt.  And they have a variety of compromised HTTPS CA certs
> they could use to MITM.  If they wanted to do that they could, HTTPS
> or no. If they did it on a large scale, they would likely get caught,
> so they would only do such things if they were after a specific high
> value target. Hopefully you are not on their short list.

You can help mitigate each of these attacks by using HTTPS together with
HPKP to cause browsers to reject attack certs.  Anyway, you shouldn't
only think of one intelligence agency as a threat when distributing
privacy software.  Governments in any country where you may have users
might be interested in introducing malware into the versions downloaded
by some or all users in that country.  If manual signature checking is
rare -- as it probably will be -- then using HTTPS can be an important
step toward addressing that threat.  Maybe the actual attacks against
the integrity of your software distribution won't come from NSA, but
rather from some other government -- and maybe they _won't_ be able to
mount a successful attack against HTTPS certificate verification.
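
As a sketch of what that looks like in practice (hypothetical keys; HPKP
takes the base64 of the SHA-256 hash of each DER-encoded
SubjectPublicKeyInfo, and requires at least two pins so you keep a
backup key):

```python
import base64
import hashlib

def hpkp_header(spki_der_primary, spki_der_backup, max_age=5184000):
    # Each pin value is the base64-encoded SHA-256 digest of a
    # DER-encoded SubjectPublicKeyInfo: the currently deployed key
    # plus at least one backup key.
    def pin(spki):
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()
    return ('Public-Key-Pins: '
            'pin-sha256="%s"; ' % pin(spki_der_primary) +
            'pin-sha256="%s"; ' % pin(spki_der_backup) +
            'max-age=%d' % max_age)
```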



Re: [tor-talk] Confidant Mail

2015-02-03 Thread Seth David Schoen
Andrew Roffey writes:

> michael ball:
> > On Tue Feb 3, Mike Ingle wrote:
> > > I don't have HTTPS because there is nothing secret on the site, and
> > > because I don't place much trust in it
> >
> > i may be mistaken, but it is kinda stupid not to use HTTPS on a
> > website with downloads, as documents released by Ed Snowden show that
> > the NSA has the capability of injecting malicious software into
> > active EXE file downloads in realtime.
>
> Then GnuPG signatures would perhaps be more appropriate in this instance?

The Tor Project itself has found that users often don't verify GPG
signatures on binaries (I think Mike Perry quoted some statistics about
how often the Tor Browser binary had been downloaded in comparison to
the .asc signature file -- it was orders of magnitude less often).  That
suggests to me that HTTPS should be used for software distribution
authenticity even when there's a signature available; the importance of
this only diminishes if the signature will be verified automatically
before installation (like in some package managers).  That's usually
not the case for first-time installations of software downloaded from the
web.

(I don't think the Tor Project has studied _why_ the users didn't verify
the signatures -- there are tons of possible reasons.  But it's clear
that most didn't, because the .asc file is so rarely downloaded.)



Re: [tor-talk] TOR issues

2015-01-05 Thread Seth David Schoen
Hollow Quincy writes:

> Dear TOR community,
>
> I spent some time trying to understand how TOR works. I still cannot
> understand some design assumptions. Could you please help me to
> understand some issues?

I think some of your questions are based on misunderstanding the
difference between circuits that exit to public Internet services, and
circuits that terminate at Tor hidden services.  These are two separate
Tor features, and each circuit (that eventually reaches some service)
terminates in one way or the other but not both at the same time.

> 1) Who stores the mapping Onion_URL to real IP? How does the exit node
> know where to send the request?

Exit nodes aren't used for hidden services at all.  Onion URLs are only
used to refer to hidden services, which communicate entirely within the
Tor network and don't exit.  Most uses of Tor use exit nodes to reach
public services on the ordinary Internet, instead of using onion URLs.

The hidden service directory mapping is performed by the hidden service
directory. :-)

> 2) How do I become an Exit Node?
> I understand that everyone can become a normal node. If I become an exit
> node, even for some requests, I can find the mapping Onion_URL to real IP.
> Then the IP of the page is not secret any more.

Everyone can become an exit node by declaring a non-empty exit policy.
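
Concretely, that just means putting something like the following in the
relay's torrc (an illustrative policy, not a recommendation):

```
ExitPolicy accept *:80
ExitPolicy accept *:443
ExitPolicy reject *:*
```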

That does allow them to monitor user communications and see where the
users are connecting.  In Tor's design this is not considered bad,
because the _identity_ of those particular users should still be hidden
(although it's potentially bad in some threat models, like when the
same adversary operates, or monitors, both the entry and exit points of
a particular user simultaneously).

Exit nodes (or at least their exit service!) are not used in any way
for contacting hidden services.  Hidden services and hidden service
users communicate entirely within the Tor network.  (Hidden services
themselves build Tor circuits in order to talk to their users.)

> 3) How is the communication encrypted between nodes?
> RSA encryption is not resistant to Man In The Middle attacks (that's
> why when I connect to a new SSH server I need to add the public key of
> the server to a trusted list).
> When I use TOR my request goes to Node1 and then to Node2. How can I
> establish a safe connection with Node2, when Node1 is between us?

Each Tor relay has its own public key which it declares when registering
with the Tor directories.  The Tor directories confirm that they have
the same view of the relays on the network, and the relays' public keys,
through the consensus mechanism.

That means that the Tor directories are something like certificate
authorities or PKI for the regular Tor relays.  You have to trust the
consensus of the directories to give you the correct public keys for the
relays you plan to use, so that no relay (or ISP) can perform an
undetected man-in-the-middle attack.

https://www.torproject.org/docs/faq#KeyManagement

> 4) Is there a single point of failure?
> There needs to be one central place where all IPs of TOR nodes are
> stored, so when I run my TOR bundle I go to this place, read the node
> list, and send requests using it. So if this place is down (for example
> because of a DDoS attack) new users will not be able to use the TOR
> network. They will not find any TOR node.

The directory authorities, for some purposes, might be the single point
of failure you're looking for.  Since they have some redundancy, you
might not call them a single point of failure, but they could be seen
collectively as a point of failure because they need to be operating and
reachable in order for users to be able to learn how to connect to the
Tor network.



Re: [tor-talk] All I Want For X-mas: TorPhone

2014-12-26 Thread Seth David Schoen
spencer...@openmailbox.org writes:

> Awesome!
>
> Though a tablet could work, I am more for a more pocket-sized mobile
> device. Also, Seth, thanks for the more in-depth concern regarding
> the WiFi MAC address and guard nodes; however, though I am all for
> people knowing how their devices work and why, the details of that
> kind of stuff are a bit over my head, even if I know what they are.

Hi Spencer,

The MAC address, at least, is a very important issue if you actually
want users to have location privacy with the device.  One of the most
important ways that governments and companies track physical locations
today is by recognizing individual devices as they connect to networks
(or, with some versions of some technologies, when the devices announce
themselves while searching for networks).  If the device itself has a
recognizable physical address that a network operator or just someone
listening with an antenna can notice, that is a tracking mechanism --
and not a theoretical tracking mechanism but one that's been reduced to
practice by advertisers, hotspot operators, and governments.

Depending on what kind of privacy you're looking for, using Tor in this
scenario might not help much, because other people can still tell where
you are (at least a particular device!), and, depending on the scope of
the trackers' view of things, may be able to go on to make a connection
between your device using Tor today over here and your device using
Tor next week over there.  In that case, the users of such devices
don't get the level of blending-into-a-crowd they might expect.

One privacy property you might want as a user of such a device is that
when you get online from a particular network, other people on that
network don't know it's you, but just see that some non-specific user of
the TorPhone is now on the network.  Without solving the MAC address
issue, and possibly some other related issues, you won't get that
property, even if the device is totally great in other ways.
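
To illustrate the kind of fix involved: an OS that wanted that property
could periodically assign a fresh, randomly generated locally
administered address, something like this sketch (which only generates
the address; actually applying it to an interface is OS-specific):

```python
import random

def random_mac():
    # Set the locally-administered bit (0x02) and clear the multicast
    # bit (0x01) in the first octet, so the result is a valid unicast
    # address that is not tied to any hardware vendor's OUI.
    first = (random.randrange(256) & 0xFC) | 0x02
    rest = [random.randrange(256) for _ in range(5)]
    return ":".join("%02x" % b for b in [first] + rest)
```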

The guard nodes historically may have constituted a similar problem
(oh, it's the Tor user who likes to go through nodes x, y, and z, not
the other Tor user who likes to go through w, x, and y, or the other
other Tor user who likes to go through p, q, and x).

A more general point is that someone who's trying to track you may use
_any_ available observable thing about you, your devices, your behavior,
and so on.  That's why really making users less distinguishable calls
for a lot of careful thinking and a lot of hard work, like in

https://www.torproject.org/projects/torbrowser/design/#fingerprinting-linkability

If you're talking about making a whole device like a phone, a lot of
that process has to be repeated, over and over again, to have a hope of
getting really strong privacy properties.  (Some people trying to make
Tor-centric operating systems like Whonix and Tails have definitely been
thinking about these problems at the operating system level, but they're
currently targeting laptops rather than phones.  And yes, they do worry
about the wifi MAC address!)



Re: [tor-talk] All I Want For X-mas: TorPhone

2014-12-25 Thread Seth David Schoen
spencer...@openmailbox.org writes:

> Ideally it would run an open OS tied to an open organization and
> come with nothing installed on it except for a mobile version of
> TorBrowser. The best example I can think of now is a forked version
> of Android with Orweb/bot installed.  Other applications could be
> installed at the discretion of the human, like F-Droid and whatnot,
> presuming they meet the security ethics of the network.

There might already be a tablet out there somewhere that's suitable
for conversion to meet some of these suggestions (since there have been
plenty of them with no GSM interface at all).  One thing to investigate
is whether the wifi MAC address can be changed and how persistent the
changes are.

I'm also wondering if some of the Tor developers could give an update
on the issue about identifying people from their guard node selection
as they roam from one network to another.  Was that a motivation for
the decision to reduce the number of guard nodes, and has that change
happened yet?  Does someone have an estimate of the anonymity set size
if you notice that a mobile Tor user is using a particular guard node?



Re: [tor-talk] Anonbib November papers without papers

2014-12-22 Thread Seth David Schoen
Sebastian G. bastik.tor writes:

> Anonbib's header states "Selected Papers in Anonymity" and I have no
> clue who selects them.

Historically I thought mostly Roger Dingledine and Nick Mathewson; it
looks like there have been a number of other contributions over the
years, though!

https://gitweb.torproject.org/anonbib.git/log/



Re: [tor-talk] CA signed SSL bad for censorship resistance?

2014-12-12 Thread Seth David Schoen
Miles Richardson writes:

> Has there been any research into the effect that CA signed SSL certs
> on .onion services have on the ability of Tor to circumvent censorship
> authorities? Is it possible there could be some leakage to the certificate
> authority that could be picked up by an ISP?

There's definitely a privacy issue about some sites because some
browsers may contact the CA's OCSP responder (mentioning which cert
they've just encountered).

https://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol

The Tor Browser design document currently says

   We have verified that these settings and patches properly proxy HTTPS,
   OCSP, HTTP, FTP, gopher (now defunct), DNS, SafeBrowsing Queries,
   all JavaScript activity, including HTML5 audio and video objects,
   addon updates, wifi geolocation queries, searchbox queries, XPCOM
   addon HTTPS/HTTP activity, WebSockets, and live bookmark updates. We
   have also verified that IPv6 connections are not attempted, through
   the proxy or otherwise (Tor does not yet support IPv6). We have also
   verified that external protocol helpers, such as smb urls and other
   custom protocol handlers are all blocked.

So, when OCSP queries to the CA happen, they should also be sent over Tor.

Sites can help reduce the incidence of OCSP queries by implementing OCSP
stapling:

https://en.wikipedia.org/wiki/OCSP_stapling



Re: [tor-talk] Hidden Services vs Onion services

2014-11-12 Thread Seth David Schoen
Nathan Freitas writes:

> On Wed, Nov 12, 2014, at 11:38 PM, Virgil Griffith wrote:
> > I'll start trying "onion service" and just see if it catches on.
>
> Since these things are mostly used for websites, why not call them
> "onion sites" or "onionsites"?
>
> Typical users don't talk about "web services"; they talk about web sites
> or pages. Perhaps they say "online service" but that usually means an
> ISP or something larger than just a site, imo.
>
> "Turn your website into an onionsite"
> "Access the onionsite in the same way you access a website"

It could be technically consistent to say both hidden services and
onion sites -- you could say that onion sites are web sites that are
served as hidden services.



Re: [tor-talk] Bitcoin over Tor isn't a good idea (Alex Biryukov / Ivan Pustogarov story)

2014-10-30 Thread Seth David Schoen
Gregory Maxwell writes:

> On Mon, Oct 27, 2014 at 11:19 PM, Seth David Schoen sch...@eff.org wrote:
> > First, the security of hidden services among other things relies on the
> > difficulty of an 80-bit partial hash collision; even without any new
> > mathematical insight, that isn't regarded by NIST as an adequate hash
>
> So?  80 bits is superior to the zero bits of running over the open internet?
>
> (You should be imagining me wagging my finger at you for falling into
> the trap of seeming to advance "not using cryptographic security at all
> since it's potentially not perfect")

I meant this only as a response to the previous poster's remark that

> > Hidden services are end-to-end encrypted so the risk of MITM between
> > nodes does not exist.

I think the risk of MITM between nodes is extremely small.  But 80-bit
partial preimages are still a disturbingly small safety margin today.
It's kind of comparing apples to kumquats, but the current Bitcoin network
as a whole has a hashrate that (if it were instead testing SHA1 hashes
of RSA-1024 keys) could find such a partial preimage in 25 days, on
average.
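
The back-of-the-envelope behind that figure (the hashrate here is an
assumed round number on the order of the network's rate at the time,
chosen to reproduce the estimate):

```python
ATTEMPTS = 2 ** 80   # expected hashes to find an 80-bit partial preimage
HASHRATE = 5.6e17    # hashes per second -- an assumed network-scale rate

days = ATTEMPTS / HASHRATE / 86400   # roughly 25 days
```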

While replicating the hashing power of the entire Bitcoin network is
a truly staggering cost (to say nothing of the associated power bill),
that isn't where I'd like to see the order of magnitude comparisons.

> Sure, though that's a one-time transfer common to all Bitcoin users,
> which the user may have already had most of previously, or obtained
> from some other source.
>
> At worst, that traffic has just identified you as someone who has
> started up a Bitcoin node.

Well, here, I was responding to the previous poster's claim that "nobody
listening your wire can have a slight clue that you use bitcoin."

> Bitcoin core intentionally obscures the timing of its transaction
> relaying and batches with other transactions flowing through. It could
> do much better; the existing behavior was designed before we had good
> tor integration and so didn't work as hard at traffic analysis
> resistance as it could have.

Cool, that sounds like a great area for an enterprising researcher
to investigate.

> In some senses Bitcoin transaction propagation would be a near ideal
> application for a high latency privacy overlay on tor, since they're
> small, relatively uniform in size, high value, frequent... and already
> pretty private and so are unlikely to gather nuisance complaints like
> email remailers do.

Do you have a sense of how it would affect payees' concerns about
double-spending attacks if the latency of transaction propagation were
increased?  Is the idea that privacy-enhancing latency additions are
only meant for cases where receivers are already waiting for multiple
confirmations?

> (New client software comes with foreknowledge of the work in the real
> network, so you cannot even provide a replacement alternative history
> without doing enormous amounts of computation, e.g. 2^82 sha256
> operations currently to replicate the history).

That's a clever idea.

> As above, at least the 'trusted' operator has considerable costs to
> attack you... This is arguably a much stronger security model than
> using tor in the first place, due to tor's complete reliance on
> directory authorities; for all you know you're being given a cooked
> copy of the directory and are only selecting among compromised tor
> nodes. This is one of the reasons that some amount of work has gone
> into supporting multi-stack network configurations in bitcoin, so that
> you can have peers on each of several separate transports.

Tor's directory decentralization needs a lot of work, but the most
practical sybil attack in Tor is just the original -- making a lot of
compromised nodes and hoping a user will repeatedly pick them.  It
still seems pretty clear that this can work against many randomly
selected Tor users at comparatively low cost (I'd agree that that's
cheaper than attacking randomly-selected Bitcoin-over-Tor users with
Bitcoin-related attacks, for the reasons you mention).  On the other
hand, I don't think we have a clear path for a would-be Tor directory /
consensus subverter to follow.

> Normally when used with tor, bitcoin nodes will use both HS and non-HS
> peers, and if non-HS peers are available it will not use more than 4
> non-HS peers.
>
> However, because of the way tor's exit selection works, the non-HS
> peers usually end up entirely connecting through a single exit, which
> is pretty pessimal indeed. We'd certainly like more control of that,
> but the ability to create hidden services over the control port would
> be a higher priority IMO... since right now it's almost pointless to
> improve robustness to HS sybils when so few people go through the
> considerable extra configuration to enable running a hidden service.

I wonder if anybody is working on that on the Tor end (or whether it
has any unexpected security consequences).  I guess it would mean
that someone who can compromise even a heavily sandboxed Tor Browser
would become able to use it as a hidden service.

Re: [tor-talk] Bitcoin over Tor isn't a good idea (Alex Biryukov / Ivan Pustogarov story)

2014-10-27 Thread Seth David Schoen
s7r writes:

> All use Bitcoin default port 8333. These servers are up all the time
> and very fast.
>
> Hidden services are end-to-end encrypted so the risk of MITM between
> nodes does not exist. Also, if you run bitcoin in such a way with
> onlynet=tor enabled in config, nobody listening your wire can have a
> slight clue that you use bitcoin.

I don't mean to disparage the contribution of people who are running
Bitcoin hidden service nodes.  I think that's a very useful
contribution.

I do want to question three things about the benefits of doing so.

First, the security of hidden services among other things relies on the
difficulty of an 80-bit partial hash collision; even without any new
mathematical insight, that isn't regarded by NIST as an adequate hash
length for use past 2010.  (There has been some mathematical insight about
attacking SHA-1, which Tor hidden service names use, although I don't
remember whether any of it is known to be useful for generating partial
preimages.)  Tor hidden service encryption doesn't consistently use crypto
primitives that are as strong as current recommendations, though I think
they matched recommendations when the Tor hidden service protocol was
first invented.  That means that the transport encryption between a hidden
service user and the hidden service operator is not as trustworthy in
some ways as a modern TLS implementation would be.
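
To make the 80-bit point concrete: a hidden service name is (as I
understand the rend-spec) just the base32 encoding of the first 10
bytes -- 80 bits -- of the SHA-1 hash of the service's DER-encoded
public key:

```python
import base64
import hashlib

def onion_address(public_key_der):
    # Only the first 80 bits of the SHA-1 digest survive into the
    # address, which is why a partial collision at that length would
    # let an attacker impersonate a hidden service.
    digest = hashlib.sha1(public_key_der).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"
```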

Second, a passive attacker might be able to distinguish Bitcoin from other
protocols running over Tor by pure traffic analysis methods.  If a new
user were downloading the entire blockchain from scratch, there would
be a very characteristic and predictable amount of data that that user
downloads over Tor (namely, the current size of the entire blockchain --
23394 megabytes as of today).

Not many files are exactly that size, so it's a fairly strong guess that
that's what the user was downloading.  Even submitting new transactions
over hidden services might not be very similar to, say, web browsing,
which is a more typical use of Tor.  The amount of data sent when
submitting transactions is comparatively tiny, while blockchain updates
are comparatively large but aren't necessarily synchronized to occur
immediately after transaction submissions.  So maybe there's a distinctive
statistical signature observable from the way that the Bitcoin client
submits transactions over Tor.  It would at least be worth studying
whether this is so (especially because, if it is, someone who observes
a particular Tor user apparently submitting a transaction could try to
correlate that transaction with new transactions that the hidden services
first appeared to become aware of right around the same time).
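
The passive check I have in mind is almost trivial for an observer to
implement; in this sketch the size and tolerance are illustrative:

```python
BLOCKCHAIN_MB = 23394   # size of the full blockchain as of this writing

def looks_like_initial_sync(total_mb_downloaded, tolerance_mb=100):
    # A Tor flow whose total volume matches the public blockchain size
    # is a strong hint that the user just bootstrapped a Bitcoin node.
    return abs(total_mb_downloaded - BLOCKCHAIN_MB) <= tolerance_mb
```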

Third, to take a simpler version of the attacks proposed in the new
paper, someone who _only_ uses Bitcoin peers that are all run by
TheCthulhu is vulnerable to double-spending attacks, and even more
devious attacks, by TheCthulhu.  (You might say that TheCthulhu is
very trustworthy and would never attack users, but that does at least
undermine the decentralization typically claimed for Bitcoin because
you have to trust a particular hidden service operator, or relatively
small community of hidden service operators, not to attack you by
manipulating your view of the blockchain and transaction history.)

Using Bitcoin over Tor hidden services might be a good choice for most
users today who want their transactions and private key ownership to
be as private as possible, but it's not free of risk, and it's probably
not an appropriate long-term solution to recommend to the general public
without fixes to some of the technologies involved!



Re: [tor-talk] Tor in other software

2014-10-23 Thread Seth David Schoen
Derric Atzrott writes:

> Good day all,
>
> Would it be useful at all, when developing other software,
> to route its communications through Tor?
>
> I'm mostly just curious if it would be useful to the Tor
> project to design software that makes use of Tor in order
> to help provide more cover traffic for the Tor network.

There was just a new article suggesting that using Tor can be
counterproductive for Bitcoin:

http://arxiv.org/abs/1410.6079

There's an older article suggesting that it's also a problem for
BitTorrent:

https://www.usenix.org/legacy/event/leet11/tech/full_papers/LeBlond.pdf

Maybe the lesson of this is that applications starting with Bit-
have anonymity risks from using Tor. :-)

More seriously, the Tor Project has traditionally encouraged people
to make various things run over Tor, and there are definitely things
that run over Tor other than web browsing, including TorBirdy, Pond,
and OnionShare (which is sort of web browsing).

I think it would be great if someone who's read both the Bad Apple and
the Bitcoin over Tor papers could explain if there are any generalizable
lessons about exactly what makes it risky to run a particular service
over Tor.  Maybe that could help future developers make better choices
about how to use Tor.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] (no subject)

2014-10-09 Thread Seth David Schoen
ben ho writes:

 get bridges

Hi,

Unfortunately you sent this to a public discussion list for talking
about Tor, which isn't the right address for requesting bridges.

The right place to send that request is brid...@bridges.torproject.org.

If you do that and your bridges don't work, you can also try other
resources at

https://bridges.torproject.org/

Good luck!

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] isp monitoring tor

2014-10-06 Thread Seth David Schoen
Mirimir writes:

 Tor is vulnerable to two general sorts of attacks. One involves the use
 of malicious relays in various ways to deanonymize circuits. The other
 involves the use of traffic analysis to correlate traffic captured at
 edges of the Tor network (to users and the websites that they access).
 
 With ISPs, there's the risk that some organization can monitor traffic
 on both ends. It's common to characterize such organizations as global
 passive adversaries. However, a single ISP (or a firm owning multiple
 ISPs) could do that, if it provides service to both users and websites.
 Also, users who access websites in their own nation via Tor are
 similarly vulnerable to their government.

To expand on this theme, there are several traffic attacks that don't
require an adversary to be truly global.  Creating a popular relay in
the hope that users who are interesting to you will route through it is a
pretty cheap and powerful attack (and one that motivated the creation of
guard nodes).  And there can be timing attacks just based on (sometimes
rather coarse-grained) knowledge of when a particular anonymous user was
active, which might even come from chat or server logs rather than from
monitoring live network traffic, so long as the attacker does have the
ability to monitor the first hop.

I've taken to saying "someone who can observe both ends most of the time"
instead of "the global adversary."  (I think the Tor developers often say
this too; the "global adversary" is just someone who can _almost always_
observe both ends.)  A kind of challenging wrinkle is that there are
a lot of conceivable ways that someone could observe one end of the
connection.  One sometimes underappreciated way is that someone else who
was observing it at the time of the communication, including a party to
the communication or a server operator, could tell the adversary about
it later.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] wake up tor devs

2014-09-17 Thread Seth David Schoen
Ted Smith writes:

 There's a reason why the NSA has Tor Stinks presentations and not I2P
 stinks presentations. 

I don't know of a good basis for estimating what fraction of NSA's
capabilities or lack of capabilities we've learned about.  And even
when someone _working at NSA_ writes that attack X doesn't work or
doesn't exist, they may not know that attack Y achieves some of the
same goals.  For example, there were press reports that there was
some major cryptanalytic breakthrough a few years ago and that it has
far-ranging implications*.  I don't think the details have ever become
public; a best-case-for-cryptographic-privacy scenario might be that it's
only an operationalized, albeit expensive, attack against 1024-bit RSA
or DH (one of the possibilities considered in Matthew Green's analysis).
In any case, many people working on surveillance within NSA might not know
what the breakthrough is or how it works, and may still be assiduously
working on attacks that in principle are largely redundant with it.

(Their NSA colleagues may want them to be working on redundant attacks
because many of the existing attacks are described as fragile -- so
they want to have parallel ways to achieve some of the same stuff.)

Most of us don't work in highly compartmentalized organizations or
organizations that try to practice a very strict need-to-know rule.
So we might think that if someone in an organization says at some time
that something is easy, or difficult, or cheap, or expensive, that that
reflects the general attitude of all the parts of that organization.
(Like if somebody working at Intel said it was hard to fabricate
semiconductor devices in a particular way, or somebody working at Boeing
said it was hard to take advantage of a particular aerodynamic effect,
or somebody working at EFF said it was hard to sue the government under
a particular legal theory, you might tend to think these things were
basically true, as far as those people's colleagues knew.)

I think that's only approximately or indirectly true of people working
in an organization like NSA or GCHQ.


* Possibly relevant reporting and discussion includes
  http://www.wired.com/2012/03/ff_nsadatacenter/all/
  
http://www.wired.com/2013/09/black-budget-what-exactly-are-the-nsas-cryptanalytic-capabilities/
  http://blog.cryptographyengineering.com/2013/12/how-does-nsa-break-ssl.html
  
http://www.nytimes.com/interactive/2013/09/05/us/documents-reveal-nsa-campaign-against-encryption.html?_r=1;
  (including claims of widespread success at defeating cryptography,
  partly on the basis of sabotaging it but at least partly on the
  basis of development of advanced mathematical techniques)

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] I have a quick question about security of tor with 3 nodes

2014-08-28 Thread Seth David Schoen
John Doe writes:

 How can I set the number of relays in the configuration file? Also can you 
 explain why 3 is enough? I hear things of analysis being able to track people 
 trough the various relays they use. This worries me some. Care to help me 
 understand?

https://www.torproject.org/docs/faq.html.en#ChoosePathLength

The link there to the threat model discussion is broken.  A link that
works is

https://svn.torproject.org/svn/projects/design-paper/tor-design.html#subsec:threat-model

Historically, this is one of the most common questions about Tor.

There's evidence that some people have successfully deanonymized some Tor
users, but I don't know of evidence that this has been done by tracing
each individual hop of the path (tracing the users through each relay
in turn) or that there's a case where that would be the easiest way to
deanonymize a user.

I guess it's possible that that would be the easiest way if _all three_
relays are malicious and are working together; the problem with trying to
add more relays as a response to that is that the Tor design has assumed,
seemingly correctly, that having just a malicious entry and exit relay
that are working together is enough to deanonymize a user in practice.
Adding more middle relays can't affect the probability of that situation.
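A back-of-the-envelope illustration of that last point, with an invented
fraction of attacker-controlled relay selections:

```python
# If a fraction f of (bandwidth-weighted) relay selections is attacker-
# controlled -- f = 0.05 here is purely illustrative -- then the chance
# that a circuit's entry and exit are both malicious is f * f.  Middle
# relays never enter that product, so lengthening the path doesn't help.
f = 0.05
p_both_ends = f * f
for middle_relays in (1, 2, 5):
    print(middle_relays, "middle relay(s): P(entry & exit malicious) =",
          p_both_ends)
```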

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread Seth David Schoen
Roger Dingledine writes:

 But in this particular case I'm stuck, because the arms race is so
 lopsidedly against us.
 
 We can scan for whether exit relays handle certain websites poorly,
 but if the list that we scan for is public, then exit relays can mess
 with other websites and know they'll get away with it.

I think the remedy is ultimately HTTPS everywhere.  Then the problem
is reduced to checking whether particular exits try to tamper with the
reliability or capacity of flows to particular sites, or with the public
keys that those sites present.  (And figuring out whether HTTPS and its
implementations are cryptographically sound.)

The arms race of "we don't really have any idea what constitutes correct
behavior for this vast number of sites that we have no relationship
with, but we want to detect when an adversary tampers with anybody's
interactions with them" seems totally untenable, for exactly the reasons
that you've described.  But detecting whether intermediaries are allowing
correctly-authenticated connections to endpoints is almost tenable,
even without relationships with those endpoints.
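A minimal sketch of that reduced check, assuming (hypothetically) that
one already holds a pinned fingerprint for a site; the byte strings
below stand in for real DER-encoded certificates:

```python
import hashlib

# Invented stand-in for a site's real DER-encoded certificate and its pin.
pinned = hashlib.sha256(b"example-site-der-cert").hexdigest()

def exit_looks_honest(cert_seen_via_exit: bytes) -> bool:
    # An exit that substitutes a different certificate fails the pin check,
    # with no need to know what "correct" page content would look like.
    return hashlib.sha256(cert_seen_via_exit).hexdigest() == pinned

print(exit_looks_honest(b"example-site-der-cert"))  # True
print(exit_looks_honest(b"mitm-substituted-cert"))  # False
```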

(I do think that continuing to work on the untenable secret scanning
methods is great, because attackers should know that they may get caught.
It's a valuable area of impossible research.)

Yan has just added an "HTTP Nowhere" option to HTTPS Everywhere, which
prevents a browser from making any HTTP connections at all.  Right now
that would probably be quite annoying and confusing to Tor Browser users,
but maybe with some progress on various fronts it could become less so.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Spoofing a browser profile to prevent fingerprinting

2014-07-29 Thread Seth David Schoen
Joe Btfsplk writes:

 I'm no expert on fine details of this, but over a long time of
 checking TBB, Firefox, JonDo Fox, etc., on multiple test sites, it's
 always clear that far more info is available when JS is enabled.
 The EFF says ~ 33 bits of identifying info (ii) are needed to
 accurately identify the same browser / machine at multiple sites.

Strictly speaking, the 33 bits figure refers to identifying a _person_,
and comes from Arvind Narayanan, who calculated it by rounding down the
base 2 logarithm of the world's human population.  (If you can ask
33 perfectly independent and identically distributed yes-or-no questions
about a person, the set of answers to those questions will be completely
unique.)
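That calculation can be reproduced in two lines (the population figure
is a round approximation):

```python
import math

world_population = 7_000_000_000  # rough figure; any value near this works
bits = math.log2(world_population)
print(round(bits, 2))  # about 32.7, i.e. ~33 yes-or-no questions
```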

There are probably fewer Internet-connected browser instances than
living people, so less information might suffice to distinguish them.

If you're using EFF's Panopticlick page, you should be aware of some
limitations about the measurements it gives you.  One is that it doesn't
measure all possible measurable attributes of a browser -- people doing
user tracking may have additional measurement techniques that aren't
included in Panopticlick.  Another is that the bits of information
that you get from measuring each attribute don't actually add linearly
(and there's no direct way of adding them without knowing more about
the population statistics and how the attributes interact).  So if you
get an estimate that your Foo browser feature contributes 6 bits of
identifiability and your Bar browser feature contributes 5 bits, you
can't necessarily conclude that together they contribute 11 bits.
(Another limitation that Peter Eckersley, the developer of Panopticlick,
pointed out to me is that the sample of fingerprints in Panopticlick's
database isn't very current or very representative of a larger population
of user-agents that are getting used in 2014.)
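A toy demonstration of that non-additivity -- the population here, with
its two perfectly correlated attributes, is entirely invented:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of values."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Browser and timezone are perfectly correlated in this invented population.
people = [("firefox", "utc")] * 4 + [("chrome", "pst")] * 4

h_browser = entropy([b for b, _ in people])  # 1.0 bit
h_tz = entropy([t for _, t in people])       # 1.0 bit
h_joint = entropy(people)                    # also 1.0 bit, not 1.0 + 1.0
print(h_browser + h_tz, h_joint)  # prints 2.0 1.0
```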

You're definitely right that Javascript is an important part of many
browser fingerprinting techniques and that browser fingerprinting will
work much less well without it.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Spoofing a browser profile to prevent fingerprinting

2014-07-29 Thread Seth David Schoen
Mirimir writes:

 Discussions of measured entropy and stuff are too abstract for me. Maybe
 someone can help me with a few simpleminded questions.
 
 About 2.2 million clients are using Tor these days. Let's say that I've
 toggled NoScript to block by default, and that I have a unique pattern
 of enabling particular scripts on particular sites. That is, I'm unique
 among all Tor users. In what ways does that put my Tor use at risk of
 being linked to IP addresses seen by my entry guards?

It means that if you go to site A today, and site B next week, the site
operators (or the exit node operators, or people spying on the network
links between the exit nodes and the sites) might realize that you're
the same person, even though you took mostly or completely separate paths
through the Tor network and were using Tor on totally different occasions.

There are several ways of looking at why this is a privacy problem.
One is just to say that there's less uncertainty about who you are,
because even if there are lots of site A users and lots of site B users,
there might not be that many people who use both.  Another is that you
might have revealed something about your offline identity to one of the
sites (for example, some people log in to a Twitter account from Tor
just to hide their physical location, but put their real name into their
Twitter profile) but not to the other.  If you told site A who you are,
now there's a possible path for site B to realize who you are, too, if
the sites or people spying on the sites cooperate sufficiently.

In terms of identifying your real-world IP address, it provides more
data points that people can try to feed into their observations.  For
example, if someone is doing pretty coarse-grained monitoring ("who
was using Tor at all during this hour?") rather than fine-grained
monitoring ("exactly what times were packets sent into the Tor network,
and how many packets, and how big were they?"), having a link between
one time that you used Tor and another time that you used Tor would be
useful for eliminating some candidate users from the coarse-grained
observations.

For instance, suppose that you went to site A at 16:00 one day and to
site B at 20:00 the following day.  If site A and site B (or people
spying on them) can realize that you're actually the same person through
browser fingerprinting methods, then if someone has an approximate
observation that you were using Tor at both of those times, it becomes
much more likely that you are the person in question who was using the
two sites.  Whereas if the observations are taken separately (without
knowing whether the site A user and the site B user are the same person
or not), they could have less confirmatory power.
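Concretely, the elimination step might look like this -- the candidates
and timestamps are wholly invented:

```python
# Hours in which each candidate's network showed any Tor use at all
# (coarse-grained observations; all data here is invented).
candidates = {
    "alice": {"mon 16:00", "tue 20:00", "wed 09:00"},
    "bob":   {"mon 16:00", "wed 11:00"},
    "carol": {"tue 20:00"},
}

# Fingerprinting linked the site-A and site-B visits to one person, so
# that person must have been on Tor at both observed times.
linked_visits = {"mon 16:00", "tue 20:00"}

still_possible = [who for who, hours in candidates.items()
                  if linked_visits <= hours]
print(still_possible)  # ['alice']
```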

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Spoofing a browser profile to prevent fingerprinting

2014-07-29 Thread Seth David Schoen
Mirimir writes:

 The risk from doing that, of course, is that each user will tend to
 customize their NoScript profile in a distinct way. And that will allow
 websites to tell them apart.
 
 Even so, Panopticlick can't report anything about that. For that, one
 would need a version of Panopticlick that's restricted to assessing and
 comparing Tor browser profiles. Right?

Yes, ultimately to make the numbers be meaningful in this sense,
they'd need to measure everything that's realistically measurable by
an adversary, and then they would need a current representative sample
of browsers (or of Tor Browser configurations).

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Spoofing a browser profile to prevent fingerprinting

2014-07-29 Thread Seth David Schoen
Mirimir writes:

  For instance, suppose that you went to site A at 16:00 one day and to
  site B at 20:00 the following day.  If site A and site B (or people
  spying on them) can realize that you're actually the same person through
  browser fingerprinting methods, then if someone has an approximate
  observation that you were using Tor at both of those times, it becomes
  much more likely that you are the person in question who was using the
  two sites.  Whereas if the observations are taken separately (without
  knowing whether the site A user and the site B user are the same person
  or not), they could have less confirmatory power.
 
 That's getting perilously close to traffic confirmation, isn't it?

Yes!  But other kinds of fingerprinting could drastically reduce the
fine-grainedness of the observations that you need in order to do a
traffic confirmation-style attack.  Instead of sub-second packet timings
or complete circuit flow volumes or whatever, you might be able to say
something like "what approximate times of day on which days was this
person using Tor at all?"

It might be interesting to think about this in terms of a paper like
"Users Get Routed" -- trying to expand understanding of the risk of
attacks, as the authors of that paper say, of user behavior when we
include (1) browser fingerprinting risks in relation to user behavior,
and (2) relatively limited adversaries, including some who didn't have
deanonymizing Tor users as a primary goal.

The Harvard bomb threat case, as I understand it, shows a specific
example of deanonymizing a Tor user by an adversary (Harvard's network
administrators) who did retain some data partly in order to reduce network
users' anonymity, but who didn't seem to have had a prior goal of breaking
Tor anonymity in particular.  And the data that they apparently retained
was more coarse-grained than what would be ideal for traffic confirmation
attacks in general.

I don't mean that the Harvard case involved browser fingerprinting
at all.  I guess I just mean that browser fingerprinting's relevance
to Tor anonymity might include increasing the information available to
limited network adversaries.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] ISP surveillance.

2014-07-24 Thread Seth David Schoen
Marcos Eugenio Kehl writes:

 Hello experts!
 TAILS, running by usb stick, protect me against forensics tecnics in my pc. 
 Ok. 
 TOR, running as a client only or as a relay, protect (theoretically) my 
 privacy. Ok.
 But... if my static IP, provided by my ISP, is under surveillance by a legal 
 requirement, what kind of data they can sniff?
 
  I mean, my connection looks like a simple HTTPS, or they know I am diving 
 into the Deep Web, hacking the world? Could the ISP capture the downloads 
 dropping into my pc when running TAILS? 
 If so, TOR Socks (proxy + TOR) is the pathway to deceive and blindfold my 
 ISP? 
 
 https://www.torproject.org/docs/proxychain.html.en

Oi Marcos,

Normally Tor doesn't try to hide the fact that you are using Tor.  So,
your ISP can see that you're using it, and when.  Tor only tries to hide
the particular details of what you are doing.

Although some Tor connections do look like simple HTTPS in some ways,
the connections are always made to the IP addresses of Tor nodes, and
the complete list of those addresses is openly published.  So it's easy
for the ISP to notice that you're using Tor, and some firewalls and
kinds of surveillance equipment can be programmed to detect Tor use if
the person operating them cares about it.
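In other words, the observer's check can be a simple set membership test
against the published relay list; the addresses below are
documentation-range placeholders, not real relays:

```python
# Placeholder "consensus" of relay addresses (invented, RFC 5737 ranges).
relay_addresses = {"203.0.113.5", "198.51.100.7", "192.0.2.99"}

# Destinations of a customer's outbound flows, as an ISP might log them.
flows = ["203.0.113.5", "93.184.216.34", "198.51.100.7"]

tor_flows = [ip for ip in flows if ip in relay_addresses]
print(tor_flows)  # ['203.0.113.5', '198.51.100.7']
```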

There are other methods to try to hide the fact that you're using Tor,
especially meant for people on networks that block Tor.  The main method
of doing this is called "bridges", which you can read more about on the
Tor web site.

https://bridges.torproject.org/
https://www.torproject.org/docs/bridges

Most people who use bridges are on networks where Tor is blocked
completely, so they have a very practical reason to try to hide the fact
that they're using Tor.

One of the benefits of Tails is that it will send all of your
communications over Tor.  So, if you believe that Tor is appropriate to
protect you in a particular situation, you can get that protection
automatically when you are using Tails.  Your ISP will not directly see
what you do, although someone who can see both ends of the connection
can try to use information about the time of the connection to identify
you.

Torsocks and configuring Tor to use a proxy are not very relevant to Tails
users.  Torsocks has to do with getting other applications apart from
the Tor Browser to communicate over Tor (which Tails does
automatically!), while configuring Tor to use a proxy is mostly relevant
if you're behind a firewall which doesn't allow direct Internet
connections.  (Sometimes it's an alternative to bridges, but it may not
be a particularly strong way of hiding your activity from your ISP --
it doesn't add any additional encryption or obfuscation.)

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107

