Re: [tor-dev] Proposal 320: Removing TAP usage from v2 onion services

2020-05-13 Thread Paul Syverson
On Thu, May 14, 2020 at 12:46:42AM +1000, teor wrote:
> Hi Nick,
> 
> > On 14 May 2020, at 00:09, David Goulet  wrote:
> > 
> > On 11 May (16:47:53), Nick Mathewson wrote:
> > 
> >> ```
> >> Filename: 320-tap-out-again.md
> >> Title: Removing TAP usage from v2 onion services
> >> Author: Nick Mathewson
> >> Created: 11 May 2020
> >> Status: Open
> >> ```
> >> 
> >> (This proposal is part of the Walking Onions spec project.  It updates
> >> proposal 245.)
> >> 
> >> # Removing TAP from v2 onion services
> >> 
> >> As we implement walking onions, we're faced with a problem: what to do
> >> with TAP keys?  They are bulky and insecure, and not used for anything
> >> besides v2 onion services.  Keeping them in SNIPs would consume
> >> bandwidth, and keeping them indirectly would consume complexity.  It
> >> would be nicer to remove TAP keys entirely.
> >> 
> >> But although v2 onion services are obsolescent and their
> >> cryptographic parameters are disturbing, we do not want to drop
> >> support for them as part of the Walking Onions migration.  If we did
> >> so, then we would force some users to choose between Walking Onions
> >> and v2 onion services, which we do not want to do.
> > 
> > I haven't read the entire proposal so I won't comment on its technical
> > aspects. I was reading, got to this point, and it made me very uncertain
> > about the whole proposal itself.
> > 
> > I will propose that we revisit the overall idea of changing v2 here.
> > 
> > I personally think this is the wrong approach. Onion services v2 should be
> > deprecated, as in removed from the network, instead of being offered as a
> > choice to users.
> > 
> > We haven't properly done a deprecation path yet for v2, primarily due to our
> > lack of time to do so. But at this point in time, where the network is 100%
> > composed of relays supporting v3 (which took 3+ years to get there), it is
> > time for v2 to not be presented as a choice anymore.
> > 
> > It is a codebase that is barely maintained, no new features are being added
> > to it, and thus moving it to ntor means at least another 3 years of network
> > migration. This would mean a major new feature in that deprecated
> > codebase...
> > 
> > So thus, I personally will argue that moving v2 to ntor is really not the
> > right thing to do. Onion services v2 are, at this point in time, a
> > _dangerous_ choice for the users.
> 
> I agree that we shouldn't support old features forever. And it seems unwise
> to spend development effort just to migrate away from TAP, when we could
> instead spend that time migrating away from TAP and v2 onion services.
> (And reducing our dependency on SHA1 and RSA keys.)
> 
> Strategically, it also seems unwise to carry v2 onion services, TAP
> handshakes, RSA relay keys and signatures, and SHA1 into walking onions.
> 
> But it's hard to make these kinds of decisions without approximate
> timeframes.
> 
> How long would it take to migrate away from v2 onion services?
> 
> How long would it take to introduce walking onions?
> 
> If we decide to modify v2 onion services, how long would that migration
> take? And what's the final plan to end the modified v2 onion services?
> 

I completely agree about not maintaining things forever and that there
are security reasons for abandoning v2 (much) sooner rather than later,
but as always I don't think we can just flatly state what is a
dangerous choice for users without specifying a usage and adversary
context. I'm not trying to open a discussion of how dangerous this is
or to get people to give that specification, only cautioning against
such unqualified statements.

Another element in this decision making is whether to take into
account and engage with the userbase for v2 onion services. The most
salient case is probably Facebook, but there may be others with a
significant amount invested in specific v2 addresses. We could just make
decisions about a timeline and inform them, or we could engage with at
least some of the more popular or larger v2 onion services to see if
they have reasons why, e.g., officially announcing EOL in 3 months
is fine but 2 months would make for craziness for them.

Si Vales, Valeo,
Paul
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] CVE-2020-8516 Hidden Service deanonymization

2020-02-04 Thread Paul Syverson
On Tue, Feb 04, 2020 at 04:15:23PM -0500, David Goulet wrote:
> On 04 Feb (19:03:38), juanjo wrote:
> 
[snip]
> 
> And the reason for private nodes is probably because this way you eliminate
> noise from other tor traffic so _anything_ connecting back to your ORPort is
> related to the onion service connections you've done. You don't need to filter
> out the circuits with some custom code (which is very easy to do anyway).
> 
> That is unfortunately a problem that onion services have. These types of guard
> discovery attacks exist and they are the primary reason why we came up with
> Vanguards a couple of years ago:
> 
> https://blog.torproject.org/announcing-vanguards-add-onion-services
> 

Indeed. Just to underscore the point: we demonstrated those attacks
in the wild and proposed versions of vanguards in the same work where
we introduced guards in the first place, published way back in 2006.

> But one thing is for sure: simply forcing rendezvous points to be part of the
> consensus will _not_ fix this problem, as it is fairly easy to pull off this
> type of attack by simply using a normal relay within the consensus.
> 
+1

aloha,
Paul


Re: [tor-dev] Proposal 300: Walking Onions: Scaling and Saving Bandwidth

2019-02-08 Thread Paul Syverson
Hi Nick,

This is awesome. We at NRL discussed a very similar concept starting
about a year and a half ago after going over the PIR-Tor paper in a
reading group. We've left it mostly backburnered since then, though I
thought we had talked about it a few times to people at the Tor Dev
meetings.

Anyway, GMTA, and now we don't have to wonder when we'll get around to
it since you're doing it. The core ideas we had are in this proposal
already, I think.  Here are a few thoughts that we've discussed that I
didn't see mentioned.

For handling exits:
Including just the exit ports along with the index of relays on the list
given to clients doesn't seem like much overhead.

For ASes, MyFamily, /16 separation, etc.:
The first and (especially) middle relay could be expected to know the
full consensus, enforce these, and (provably) indicate a violation of
these policies if one occurs. Alternatively, if a client requests from
a guard or middle relay through which it is building a circuit the
descriptors of a handful (say 5) of relays selected by index, then the
client can make this decision amongst those, or start again if they all
fail policy.  (Clients might also make use of previously cached
descriptors. This is another potential source of leakage, which is
reduced by including the relay index of an already cached descriptor in
a request if it is a candidate to be selected, since it is known to
match, e.g., needed exit ports as well as family, etc. constraints.)
More issues, but it's an idea probably worth contemplating.
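As a rough illustration of the client-side check described above (all names
and the descriptor format here are made up for the sketch, not anything from
the tor codebase): the client receives a handful of descriptors selected by
index and keeps only those compatible with the circuit built so far.

```python
import ipaddress
import random

def same_slash16(ip_a: str, ip_b: str) -> bool:
    """True if two IPv4 addresses share the same /16 prefix."""
    return ipaddress.ip_address(ip_a).packed[:2] == ipaddress.ip_address(ip_b).packed[:2]

def acceptable(candidate: dict, circuit: list) -> bool:
    """Reject a candidate that violates /16 or MyFamily separation
    with any relay already in the circuit."""
    for hop in circuit:
        if same_slash16(candidate["ip"], hop["ip"]):
            return False
        if set(candidate["family"]) & set(hop["family"]):
            return False
    return True

def pick_next_hop(candidates: list, circuit: list):
    """Choose among the handful of descriptors the relay returned.
    None means all failed policy: request a fresh handful and retry."""
    ok = [c for c in candidates if acceptable(c, circuit)]
    return random.choice(ok) if ok else None
```

A "start again" loop around pick_next_hop, re-requesting when it returns
None, matches the retry behavior sketched in the paragraph above.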

aloha,
Paul


On Tue, Feb 05, 2019 at 12:02:50PM -0500, Nick Mathewson wrote:
> Filename: 300-walking-onions.txt
> Title: Walking Onions: Scaling and Saving Bandwidth
> Author: Nick Mathewson
> Created: 5-Feb-2019
> Status: Draft
> 
> 0. Status
> 
>This proposal describes a mechanism called "Walking Onions" for
>scaling the Tor network and reducing the amount of client bandwidth
>used to maintain a client's view of the Tor network.
> 
>This is a draft proposal; there are problems left to be solved and
>questions left to be answered.  Once those parts are done, we can
>fill in section 4 with the final details of the design.
> 
> 1. Introduction
> 
>In the current Tor network design, we assume that every client has a
>complete view of all the relays in the network.  To achieve this,
>clients download consensus directories at regular intervals, and
>download descriptors for every relay listed in the directory.
> 
>The substitution of microdescriptors for regular descriptors
>(proposal 158) and the use of consensus diffs (proposal 140) have
>lowered the bytes that clients must dedicate to directory operations.
>But we still face the problem that, if we force each client to know
>about every relay in the network, each client's directory traffic
>will grow linearly with the number of relays in the network.
> 
>Another drawback in our current system is that client directory
>traffic is front-loaded: clients need to fetch an entire directory
>before they begin building circuits.  This places extra delays on
>clients, and extra load on the network.
> 
>To anonymize the world, we will need to scale to a much larger number
>of relays and clients: requiring clients to know about every relay in
>the set simply won't scale, and requiring every new client to download
>a large document is also problematic.
> 
>There are obvious responses here, and some other anonymity tools have
>taken them.  It's possible to have a client only use a fraction of
>the relays in a network--but doing so opens the client to _epistemic
>attacks_, in which the difference in clients' views of the
>network is used to partition their traffic.  It's also possible to
>move the problem of selecting relays from the client to the relays
>themselves, and let each relay select the next relay in turn--but
>this choice opens the client to _route capture attacks_, in which a
>malicious relay selects only other malicious relays.
> 
>In this proposal, I'll describe a design for eliminating up-front
>client directory downloads.  Clients still choose relays at random,
>but without ever having to hold a list of all the relays. This design
>does not require clients to trust relays any more than they do today,
>or open clients to epistemic attacks.
> 
>I hope to maintain feature parity with the current Tor design; I'll
>list the places in which I haven't figured out how to do so yet.
> 
>I'm naming this design "walking onions".  The walking onion (Allium x
>proliferum) reproduces by growing tiny little bulbs at the
>end of a long stalk.  When the stalk gets too top-heavy, it flops
>over, and the little bulbs start growing somewhere new.
> 
>The rest of this document will run as follows.  In section 2, I'll
>explain the ideas behind the "walking onions" design, and how they
>can eliminate the 

Re: [tor-dev] Standardizing the 'onion service' name

2018-04-26 Thread Paul Syverson
Thanks teor for spelling things out in moderate detail and Steph for
commenting.  

My tl;dr is basically to concur with everything they both said: the
further from user-facing text and the more embedded down in the codebase,
variables, code comments, controller commands, etc., the less important
it is to spend effort eliminating such vestiges from existing text. Going
forward, certainly any code comments and, e.g., any commands that won't
break things should use current terminology.

aloha,
Paul

On Thu, Apr 26, 2018 at 11:07:27AM -0400, Stephanie A. Whited wrote:
> Hi!
> 
> Thanks for adding me to the thread.
> 
> 
> On 4/26/18 3:34 AM, teor wrote:
> > Hi All,
> >
> > There seems to be some confusion in this thread about the
> > current state of the hidden service to onion service name transition.
> >
> > So I'm going to describe the state of things as they are, then try to
> > describe what would need to be done.
> >
> > I'd also appreciate feedback from Steph and others on our priorities for
> > transitioning to the onion service name. I think we have been prioritising
> > user-facing text. (The blog, website, Tor Browser, metrics,  etc.)
> Yes, user-, funder-, and press-facing text has consistently been using
> "onion services" at least since I've been on board. When I first joined,
> the message to me was that this was changed a couple years prior but the
> changeover had been slow. I realize now a lot of folks didn't get that
> message.
> 
> We have already, with the help of Kat, updated hidden to onion services
> where possible on the website. I think the exception to this is older
> blog posts.
> 
> I realize it is not as easy to change some of the other elements, like
> the directory protocol, but I think it's important to make what changes
> are possible. I think it'd have a very positive impact to see these
> through.
> 
> Being aligned will help us build trust with the press, new users, etc.
> There are several reasons the name changed, if it is helpful to share
> more about that lmk.
> 
> >
> > Is this a sensible way of prioritising things?
> >
> > On 26 Apr 2018, at 16:42, Paul Syverson <paul.syver...@nrl.navy.mil> wrote:
> >
> >>> Have OnionService aliases for controller commands, events, 
> >
> > These are currently called "hidden service" or an abbreviation.
> >
> > Tor could add an alias mechanism for controller commands, events,
> > and fields, and use it to do the rename:
> >
> > https://trac.torproject.org/projects/tor/ticket/25922
> >
> > I don't think they are as high a priority as the torrc options and man
> > page.
> >
> >>> descriptor
> >>> fields
> >
> > These are currently called "hidden service", or an abbreviation.
> >
> > Descriptor fields are part of the directory specification and
> > implementation, and they are highly technical. So I'm not sure we gain
> > much from aliasing them or renaming them.
> >
> > Similar arguments might apply to other codebases:
> > * Onionoo
> > * stem
> > * consensus health
> > * Tor (network daemon)
> >
> > But the following user-facing applications should add documentation or
> > change names, if they haven't already:
> > * Relay Search / metrics website
> >   * uses HSDir for relay search, because that's what it's called in the
> >     directory protocol
> >   * uses "onion service" for statistics
> > * Tor Browser
> >   * uses "onion site"
> > * the Tor website
> > * new tor blog posts
> Website and new posts are covered.
> >
> >>> and anything else referencing 'HS' or 'HiddenService'.
> >
> > We considered adding OnionService* torrc option aliases for every
> > HiddenService* option in 0.2.9. But we deferred that change because we
> > ran out of time.
> >
> > All we need to do is add some new entries in the alias table, then do a
> > search and replace in the tor man page:
> > https://trac.torproject.org/projects/tor/ticket/17343
> >
> >>> Speaking of which, how do we plan to replace abbreviations? Having an
> >>> 'OSFETCH' or 'OS_CREATED' event doesn't exactly have the same ring as
> >>> their HS counterparts. ;P
> >
> > That's a good question.
> >
> > OS conflicts with "operating system", so we could use:
> > * Onion
> > * OnSrv
> > * no abbreviations
> > Or any other colour you want to paint the bikeshed.

Re: [tor-dev] Standardizing the 'onion service' name

2018-04-26 Thread Paul Syverson
On Wed, Apr 25, 2018 at 05:18:32PM -0700, Damian Johnson wrote:
> Hi all, teor suggested engaging the list with #25918 so here we go!
> Ticket #25918 has a couple goals...
> 
> 1. Provide a tracking ticket for the rename effort.
> 2. Come to a consensus on if we should move forward with "onion
> service" or revert back to "hidden service". The limbo we've been in
> for months is confusing for our users and we should standardize on a
> name.

I'm very confused why you say that this is not a long-solved problem.
I see nothing in the recent posts about v2 deprecation that would in
any way change that, or even raise it as a topic.

When talking in general to people, the answer should be that they are
all (v2 and v3) onion services. That's it. I believe this is the
official position of the Tor Project, and they have worked hard to
make sure that this is reflected on any new materials on the site for
some time.  I'm cc'ing Steph in case she is not on the tor-dev list
and wants to say anything further on that point.

I'll respond to other comments inline below. Feel free to add my comments
to the ticket if you want, but IMO there is no reason a ticket should
exist at all for this.

> 
> Here's the ticket...
> 
> 
> 
> A recent post on tor-dev@ just got me thinking about the roadblocks we
> have for v2 deprecation. There's a couple I don't believe we're
> following in trac so let's fix that.
> 
> For me the biggest is its name. Renaming takes work, and we attempted
> to rename hidden services in v3 without investing the time to make it
> happen. We should either fix that or revert to the old name. To
> move forward we need to...

I don't understand this at all. We renamed hidden services to be
called 'onion services' some time ago. I don't know what it is that
you feel we didn't make happen. 'Hidden services' was an old name for
onion services that has always been misleadingly narrow about the nature
of these services and thus long in need of replacing (and actually the
name we originally and for some time used was 'location hidden
services', though 'location' simply started getting more and more
ignored along the way).  "Misleadingly narrow" because some of their
central properties are ignored by calling them 'hidden services',
viz. the stronger and more site-owner-controlled authentication these
services provide and their address-lookup security vs. the less secure
parts of the internet served by DNS.

> 
> Have OnionService aliases for controller commands, events, descriptor
> fields, and anything else referencing 'HS' or 'HiddenService'.
> 
> Speaking of which, how do we plan to replace abbreviations? Having an
> 'OSFETCH' or 'OS_CREATED' event doesn't exactly have the same ring as
> their HS counterparts. ;P
> 
> Adjust all our docs to be consistent about the name.

Right, anything v3 should be consistently calling these 'onion
services'.  Variable names, etc. particularly those still in use in
code shared with v2, don't need to be changed. It is OK if such
vestiges of older usage remain in abbreviations, as long as the
description of them, e.g., in any new Tor proposal, describes them
with appropriately current terminology.

> 
> Renaming takes work. Lesson I learned from Nyx is that it works best
> if you draw a line in the sand and stand by it. With Nyx, version 2.0
> is called Nyx (you won't find any docs saying otherwise) and version
> 1.x is the legacy 'arm' project.
> 
> If I was in your shoes I'd opt for the same. Either prioritize the
> aliases and be firm that v3 are 'Onion Services' or abort the rename.
> Otherwise this will live in a confusing dual-named limbo land
> indefinitely. ;P

I'm pretty sure that v3 being 'onion services' has been the official
Tor Project position for at least half a year. We wouldn't be
aborting the rename, because 'abort' would imply it is not
complete. Anything now not using the current name is not part of an
incomplete process; it is simply wrong and outdated. Steph, correct me
if I am wrong about that.

So I think you've answered your own question. Nothing in v3 should be
called 'hidden services'. And anything new in code and documentation
should be called 'onion services'. If you want to think of v2 and
earlier as 'hidden services' for purposes of understanding legacy
component and variable names, e.g., HSDir, that's fine. And as such,
variable names, etc. in code that continues to be used for both v2 and
v3 can persist. But again, any new specs, documentation, etc.
should call them 'onion services'.

This acceptance of existing v2 documentation calling them 'hidden
services' while refusing this for anything v3 is a little misleading
about when and why the naming transition happened, but it's close
enough to serve as your line in the sand if you feel one is needed.
I actually argued the value of essentially such a line-in-the-sand
position to Steph a while ago. This doesn't preclude also calling v2
and earlier 'onion 

Re: [tor-dev] Request for feedback/victims: cfc

2016-03-23 Thread Paul Syverson
On Wed, Mar 23, 2016 at 12:33:15PM -0400, Adam Shostack wrote:
> Nice!
> 
> Random thought: rather than "unreachable from Tor", "unreachable when
> using the internet safely."  This is really about people wanting
> security, and these companies not wanting to grapple with what their
> customers want.

Yes! Not random at all. When trying to succinctly contrast current means
of accessing and using registered-domain sites vs. onionsites, I not
infrequently slip into calling them the insecure web and the secure web,
respectively.

aloha,
Paul


Re: [tor-dev] Notes from 1st Tor proposal reading group [prop241, prop247, prop259]

2016-01-19 Thread Paul Syverson
Hi George,

Crap. I missed this, buried at the bottom of Nick's general
announcement last Thursday about reviewing Tor proposals (which was
in my big backlog of threads to get to, and I did not notice its specific
relevance to guards and onion services till I saw it here).

When is the next one of these (guard proposals review discussions)? Are
they listed on some general schedule somewhere? I see the reference to
the little-t-tor meeting tomorrow. (A) When is that? (B) Is there an
agenda? Will the whole thing be devoted to discussing these three proposals?
(I don't know if/when I can participate given other obligations,
but I would like to know what will be happening and when.)

aloha,
Paul


On Tue, Jan 19, 2016 at 10:41:23PM +0200, George Kadianakis wrote:
> Today was the first Tor proposal reading group [0] and we discussed the
> following guard proposals:
> 
>Prop#241: Resisting guard-turnover attacks [DRAFT]
>Prop#259: New Guard Selection Behaviour [DRAFT]
>Prop#247: Defending Against Guard Discovery Attacks using Vanguards 
> [DRAFT]
> 
> In this mail, I will assume that the reader is familiar with the concepts
> behind those proposals.
> 
> We started by discussing prop241 and prop259. These proposals specify how Tor
> should pick and use entry guards.
> 
> - We decided that we should merge the remaining ideas of prop241 into prop259.
> 
> - prop259: The original guardset should also include 80/443 bridges (shouldn't
>   have disjoint utopic/dystopic guard sets). We should specify a behavior on
>   how the fascist firewall heuristic should work.
> 
> - prop259: Should probably not impose a specific data structure to the
>   implementor except if strictly required. Instead maybe state the properties
>   that such a data structure needs. Maybe we can put the hashring idea in the
>   appendix?
> 
> - We can simulate the various algorithms we are examining to test their
>   behavior and correctness.  Nick and Isis have already written some guard
>   switching code to be simulated: 
> https://github.com/isislovecruft/guardsim.git
> 
>   However, no simulations happen right now. We should code some simulation
>   scenarios and check that the algorithm works as intended in all possible 
> edge
>   cases: 
> https://github.com/isislovecruft/guardsim/blob/master/doc/stuff-to-test.txt
> 
> We then moved to discussing prop247. Proposal 247 specifies how entry guards
> should be used by hidden services to avoid various attacks:
> 
> - We can think of prop241 as the proposal specifying how entry guards work on
>   Tor. It specifies how Tor selects its set of guards and also how it picks 
> and
>   switches between them.
> 
>   Then prop247 could be stacked on top of prop241 specifying how entry guards
>   are used specifically in _hidden services_ and describing how the guardsets
>   from prop241 can be used in the hidden services setting.
> 
>   To achieve this design we should figure out what we need from Tor guardsets
>   to achieve all the needs of prop247, and then we should design the interface
>   of guardsets appropriately in prop241.
> 
>   A stupid Guardset interface that prop247 could use could be:
>     guardset_layer_1 = Guardset(nodes_to_consider, n_guards=1, rotation_period_max, flags, exclude_nodes)
>     guardset_layer_2 = Guardset(nodes_to_consider, n_guards=4, rotation_period_max, flags, exclude_nodes=guardset_layer_1)
> 
> - We discussed how the HS path selection should happen in prop247.
> 
>   Should layer-2 and layer-3 vanguards be picked from the set of Guard nodes,
>   or should they be middle relays? This is important to figure out both for
>   security and performance!
> 
>   Also, it's clear that layer-2 vanguards should not be the same node as the
>   layer-1 guard. But what about layer-3 vanguards? Can they be the same node
>   as the layer-1 guard? If not, then an attacker learns information about the
>   layer-1 guard by keeping track of the list of layer-3 vanguards by
>   monitoring many layer-3 rotations.
> 
>   Also, should layer-3 guardsets share nodes between them or should they be
>   disjoint?
> 
>   We should be very careful about what kind of relays we allow in each
>   position since it's clear that it has security implications. We should
>   edit the proposal and specify this further.
> 
> - We should test our design here with a txtorcon test client, and get some
>   assurance about the performance and correctness of the system. Also, we need
>   to see how CBT interacts with it.
> 
> If you want to help with any of the above, show up for the little-t-tor
> meeting tomorrow.
> 
> ---
> 
> [0]:  https://lists.torproject.org/pipermail/tor-dev/2016-January/010219.html

Re: [tor-dev] Proposal: Stop giving Exit flags when only unencrypted traffic can exit

2016-01-06 Thread Paul Syverson
On Wed, Jan 06, 2016 at 10:21:31PM +1100, Tim Wilson-Brown - teor wrote:
> 
> > On 6 Jan 2016, at 21:26, Virgil Griffith  wrote:
> > 
> > Tom, to ensure I understand you clearly, is your argument that
> > relays that export only unencrypted shouldn't get the Exit Flag
> > because insecure/unecrypted traffic "isn't what Tor is intended
> > for?" I want to be sure that I'm fully understanding your
> > proposal.
> 
> If adversaries can set up Exit relays that only permit
> insecure/unecrypted traffic, then they can inspect/interfere with
> all the traffic going through that Exit. As can any adversary that
> is on the upstream path from that Exit.
> 
> If we ensure that Exits must pass some encrypted traffic, then
> running an Exit is less attractive to an adversary. And even
> adversaries contribute useful, secure bandwidth to the Tor Network.

Modulo them not simply setting up an acceptable policy but then just
dropping all (or much) actual traffic for the ports they didn't really
want.  (And correct attribution and sanctioning for non- or incomplete
performance are hard.)  As always, if the adversarial goal is
monitoring, it is typically just easier (and not too expensive) to
genuinely provide the service that gets you the flags, but yes, this
could still be an improvement vs. the status quo.

aloha,
Paul




> 
> So this policy is intended to protect users, and encourage non-adversarial
> contributions to network bandwidth.
> (Given the small number of Exit flags affected by this change, I'm not sure
> if this policy is responsible for all the good Exits, or if our exit-checking
> tools are responsible.)
> 
> Tim
> 
> Tim Wilson-Brown (teor)
> 
> teor2345 at gmail dot com
> PGP 968F094B
> 
> teor at blah dot im
> OTR CAD08081 9755866D 89E2A06F E3558B7F B5A9D14F
> 





Re: [tor-dev] Traffic correlation attacks on Hidden Services

2015-12-24 Thread Paul Syverson
Hi Virgil,

On Thu, Dec 24, 2015 at 06:08:51AM +, Virgil Griffith wrote:
> I've been looking into simple graph-theoretic metrics for Roster to
> quantifying Tor's susceptibility to traffic correlation attacks, mostly
> using BGPStream, https://bgpstream.caida.org/ .
> 
> All of the academic literature I've read talks about the risk to Tor users
> of an AS being in the path between client <-> guard + exit <-> destination.
> 
> Questions:
> (1) To ensure I'm not measuring the wrong thing, can someone be more
> specific on the correlation attack scenario for Tor hidden services?
> 
> (2) Just guessing, but would be it be the same but replace "exit <->
> destination" with: "HS server <-> HS guard" ?

To a first approximation, yes.

> 
> (3) If yes to (2), the natural solution is simply to install a Tor relay on
> the HS server itself so that there's no ASpath between the two?

A. This is a solution only for a limited type of onion service provider.
People who are not in a position to have an IP-identified server on
the Tor network couldn't do this. People wanting to set up an
onion service where they can only make outbound connections couldn't
do this. People running Ricochet or OnionShare couldn't do this, etc.

B. Ignoring A: There is still an AS path between the onion service
server/relay and all the other relays. It is true that this hides
the connecting-to-the-Tor-network activity of the service qua client
vs. being a relay (which I think we first noted in "Towards an
Analysis of Onion Routing Security", though that was pre-Tor).
But if this were the common practice, then listing all the onion
service IP addresses would become trivial: they're all at publicly
listed Tor relays. So someone looking for a hidden service and owning
some ASes could do the correlations, except now they have a handy
and significant filter to look first for these at Tor relays.
Worse, since you are essentially doing away with guards
for onion services, anyone running a middle relay (the easiest thing to
get into the network quickly) will be able to do the correlation you
were trying to prevent at the AS level. Our analysis of the vulnerability
of onion service connections to the network in just such a way was what
led to the introduction of guards in the first place (cf. "Locating
Hidden Servers").  As with anything real, you gain some and lose
some in making such a change. But it seems that overall you lose
more.

> 
> Comments greatly appreciated.  I'm not an internet routing expert and I
> want to ensure Roster is incentivizing the right things to harden the
> network.
> 

HTH.

May the season make sense to you and yours,
Paul


Re: [tor-dev] Special handling of .onion domains in Chrome/Firefox (post-IETF-standarization)

2015-11-02 Thread Paul Syverson
On Mon, Nov 02, 2015 at 09:05:26PM +0200, George Kadianakis wrote:
> Hello,
> 
> as you might know, the IETF recently decided to formally recognize .onion 
> names
> as special-use domain names [0].
> 
> This means that normal browsers like Chrome and Firefox can now
> handle onion domains in a special manner since they know that they
> only correspond to Tor.
> 
> How would we like those browsers to treat onions?
> 
> For starters, those browsers should refuse to connect to onion
> domains entirely.  Onions don't work on normal browsers anyway, and
> also this will reduce the onion leakage through the DNS system [1].

Well, maybe not "entirely". Cf. below.

> 
> An extra measure would be to persuade those browser vendors to
> display some sort of message to poor people who click onions using
> their normal browser. For example they could display:
> 
>   Oops, seems like you visited an onion link.  You
>   need a special anonymous browser for this:
>   www.torproject.org

It might be a better idea to point them to tor2web. For one thing
browser providers will be happier with a display that doesn't directly
tell people they need a different browser to get to an intended
address. The display could say something like:

  Oops, seems like you attempted to visit an onion address, a
  specialized address that provides additional security for
  connections to it. The site can be reached via proxy at
  [tor2web-link-to-relevant-onionsite]. To obtain the intended
  security for access to such sites, follow these few simple steps.

No doubt some wordsmithing could make this better in various respects
(amongst them, shorter).
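For illustration, here is a minimal sketch of how a non-Tor browser might special-case .onion names as RFC 7686 suggests: never hand them to DNS, and instead show an explanatory page with a proxy link. The gateway domain and function name below are hypothetical assumptions, not anything a browser vendor has implemented:

```python
from urllib.parse import urlsplit

# Hypothetical tor2web-style gateway domain, for illustration only.
TOR2WEB_GATEWAY = "onion.example"

def handle_navigation(url):
    """Sketch: treat .onion as a special-use domain per RFC 7686 --
    never resolve it via DNS; offer an interstitial with a proxy link."""
    host = (urlsplit(url).hostname or "").lower()
    if host == "onion" or host.endswith(".onion"):
        # tor2web-style gateways conventionally append their domain
        # after the onion label.
        proxied = url.replace(".onion", ".onion." + TOR2WEB_GATEWAY, 1)
        return ("interstitial: onion address; reach it via proxy at %s "
                "or install Tor Browser" % proxied)
    return "navigate: " + url
```

The essential point is only the branch: the browser neither leaks the name to DNS nor silently fails, but explains and offers the route-insecure proxy alternative.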
  
> 
> 
> What else could we do here? And is there anyone who can lobby for the right
> behavior? :)
> 
> Of course, we all know that that inevitably those browsers will need
> to bundle Tor, if they want to visit the actually secure onion
> Internet. But let's give them a bit more time till they realize this
> :)

I think something like the above improves the transition path, helping
the world along to better security instead of just waiting for the
world to catch up. (And in any case, perhaps at least a few more
months work would better prepare us for the resulting attention.)

aloha,
Paul

> 
> Cheers!
> 
> [0]: 
> https://blog.torproject.org/blog/landmark-hidden-services-onion-names-reserved-ietf
>  https://www.rfc-editor.org/rfc/rfc7686.txt
>  
> https://www.iana.org/assignments/special-use-domain-names/special-use-domain-names.xhtml
> 
> [1]: https://www.petsymposium.org/2014/papers/Thomas.pdf


Re: [tor-dev] Onion Services and NAT Punching

2015-10-04 Thread Paul Syverson
On Wed, Sep 30, 2015 at 05:12:53PM +0200, Tim Wilson-Brown - teor wrote:
> Hi All,
> 
> Do you know a use case which needs Single Onion Services and NAT punching?
> 
> We’re wondering if there are mobile or desktop applications /
> services that would use a single onion service for the performance
> benefits, but still need NAT punching. (And don’t need the anonymity
> of a hidden service.)
> 
> Single Onion Services:
> * can’t do NAT punching, (they need an ORPort on a publicly
>accessible IP address),
> * locations are easier to discover, and
> * have lower latency.

Note that we considered making the single-onion services proposal (Tor
Proposal 252) include a NAT punching option. We didn't for a
few reasons.

1. Get the proposal out there. We can always add other protocols
either as part of the proposal or in a new one.

2. Double-onion services already provide NAT punching. The performance
delta of a sesqui-onion service (onion circuit on one side, roughly
half an onion circuit on the other) is not as significant as for plain
single-onion services, and so yet another protocol to design,
maintain, keep compatible, and keep from standing in the way of new
design innovations might not be worth it.

3. Most importantly, Tor generally follows the onion routing
philosophy of making trade-offs that make more users more secure with
an eye to making the most vulnerable or sensitive the norm.

On the one hand this means things like building for interactive
circuits w/ relatively low latency. This is in theory less secure
than, e.g., mix based remailers against global external observing and
partial relay compromising adversaries.  But in practice this leads to
much larger networks (making it harder to be global) and leads to
millions vs. hundreds of users with greater diversity of use
motivation and behavior (making the fact of usage less interesting of
itself to adversaries). Cf. my "Why I'm Not An Entropist".

On the other hand, we made the circuits use three relays. Most users
would most of the time likely be fine with a one-relay circuit. By
this I mean that an adversary that actually intends to do something
that is harmful to them intentionally or collaterally is likely
countered by a one-relay circuit, which would give a significant
performance benefit. But this would mean that the users who do need and
use three-relay circuits would be much more rare and interesting, easy
to separate out, etc. Also the relays themselves become more valuable
to compromise (or set up your own, or bridge by owning the ISP) to an
adversary, which increases threat to the network itself. For these and
other reasons, the default for Tor circuits is three relays.

Now let's apply this worldview to the sesqui-onion NAT punching case.
In a world with single-onion services and double-onion services, this
is splitting off the double-onion anonymity set rather than the
single-onion set, regardless of what Tor Proposal the protocol is in.
So, the users that do require the server location protection that
double-onion services provide become a much smaller fraction, making
them more likely to be interesting/worth pursuing/easier to identify/less
plausibly able to claim they only wanted better circuit
authentication/etc. than if sesqui-onion services were not an
equally easy option to set up.

Also, given threat environments that users understand at best
ambiguously, and the well-documented tendency in practice to choose
performance and ease over hyperbolically discounted threats when
deciding whether to use encryption and other security measures, we can
assume that many will opt for the better-performing choice of
sesqui-onion services when perhaps they should not have. All the more
so in the less-easily
understood case of onion services vs. plain
client-to-public-destination Tor use. Similar also to the one
vs. three relay option, pwning relays by any of the means mentioned
two paragraphs above makes it more effective to identify onion
services in the sesqui-onion case. Thus putting additional threat
pressure on the network itself. (I recognize similar things could be
said of single vs. double onion services in general. I have some
responses there, but I am already going on overly long as usual.)

These to me are counter-arguments to the advantages of a NAT punching
sesqui-onion protocol. I don't question the many clear advantages of
having such a protocol. But to me the above make it too great a
trade-off to develop them for easy deployment. I don't think the
answers concerning this trade-off are just obvious. So I encourage
continued examples and discussions of their use. But I would like to
be convinced that they outweigh the above (and possibly other examples
of) trade-offs before I would support their development and promotion.

aloha,
Paul


Re: [tor-dev] Desired exit node diversity

2015-09-23 Thread Paul Syverson
On Wed, Sep 23, 2015 at 11:34:54AM +, Virgil Griffith wrote:
> > because "the right distribution" is a function of which adversary you're
> > considering, and once you consider k adversaries at once, no single
> > distribution will be optimal for all of them.)
> 

I agree with Roger that ideally all relays can be exits (and since
we're being ideal, we'll assume that 'exit' means to every port). And
the network location distribution of relays by bandwidth is
proportional to both the client destination selection over time and
general Internet traffic over time, which match each other since we're
being ideal, and also matter since we're using an ideal trust-aware path
selection algorithm. And network wide route selection is such that
there is no congestion (generalizing Roger's assumption of infinite
exit capacity). Also all fast-relay operators (which here is the same
as all relay operators) don't merely get a T shirt but a pony wearing
a T shirt. Put differently, I need your ceteris paribus clause spelled
out a lot more so I know what things I can assume in this ideal world
and where I have to live with the actual world (to the extent that
we even know what that looks like).

> Granted.  But since we're speaking idealizations, I say take the
> expected value over the distributions weighted by the probability of each
> adversary.  In application this would be a distribution that, although
> unlikely to be optimal against any specific adversary, has robust
> hardness across a wide variety of adversaries.

In our ongoing work on trust-aware path selection, we assume a trust
distribution that will be the default used by a Tor client if another
distribution is not specified. (Most users will not have a reasoned
understanding of who they actually need to worry most about, and even
if they somehow got that right would not have a good handle how that
adversary's resources are distributed.)  We call this adversary "The
Man", who is equally likely to be everywhere (each AS) on the
network. For relay adversaries, we assume that standing up and running
a relay has costs, so we weight things a bit to make relays that have
been around a long time slightly more likely to be trusted.
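A toy sketch of such a default weighting; the function, parameter names, and values below are all illustrative assumptions of mine, not the actual distribution used in the trust-aware path selection work:

```python
def the_man_relay_trust(relay_ages, age_bonus=0.1, max_age_days=365.0):
    """Sketch of a default-adversary ("The Man") relay trust weighting:
    every relay starts from the same base weight, and relays that have
    been around longer get a slight bonus, since standing up and
    running a relay has costs. All parameters are illustrative."""
    weights = {}
    for name, age_days in relay_ages.items():
        # Longevity saturates at max_age_days so very old relays
        # don't dominate.
        longevity = min(age_days, max_age_days) / max_age_days
        weights[name] = 1.0 + age_bonus * longevity
    # Normalize to a probability distribution over relays.
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

The analogous piece for network adversaries would simply be a uniform distribution over ASes, reflecting an adversary equally likely to be observing anywhere.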

> 
> Or, if that distribution is unclear, pick the distribution of exit-relay
> with the highest minimum hardness.  This reminds me of the average-entropy
> vs min-entropy question for quantifying anonymity.  I'd be content with
> either solution, and in regards to Roster I'm not sure the difference will
> matter much.  I am simply asking the more knowledgeable for their opinion
> and recommendation.  Is there one?

I don't think you can meaningfully do this. It's going to be based on
a particularly bad closed-world assumption (worse than the one
underlying so many website fingerprinting analyses). You would have to
assume that you know all the adversaries that all the user types have
and, if you are averaging in some way, then also the average amount of
exit utilization that each user type represents. Ignore the technical
impossibility of doing this in a privacy-safe way and then ignore the
technical impossibility of doing this in a privacy-unsafe way. You
would then be faced with the political nightmare of issuing default
policies that tell users they should route with a weighting that says
country foo has an x percent chance of being your adversary, but
country bar has a y percent chance. (Likewise also have similar
statements that substitute 'large multinational corp.', 'major
criminal organization', 'specific big government agency that is
getting all the press lately' etc.  for "country" in the last
sentence.)

aloha,
Paul

> 
> -V
> 
> 
> 
> On Wed, Sep 23, 2015 at 2:47 PM Roger Dingledine  wrote:
> 
> > On Wed, Sep 23, 2015 at 06:26:47AM +, Yawning Angel wrote:
> > > On Wed, 23 Sep 2015 06:18:58 +
> > > Virgil Griffith  wrote:
> > > > * Would the number of exit nodes constitute exactly 1/3 of all Tor
> > > > nodes? Would the total exit node bandwidth constitute 1/3 of all Tor
> > > > bandwidth?
> > >
> > > No. There needs to be more interior bandwidth than externally facing
> > > bandwidth since not all Tor traffic traverses through an Exit
> > > (Directory queries, anything to do with HSes).
> > >
> > > The total Exit bandwidth required is always <= the total amount of Guard
> > > + Bridge bandwidth, but I do not have HS utilization or Directory query
> > > overhead figures to give an accurate representation of how much less.
> >
> > On the flip side, in *my* idealized Tor network, all of the relays are
> > exit relays.
> >
> > If only 1/3 of all Tor relays are exit relays, then the diversity of
> > possible exit points is much lower than if you could exit from all the
> > relays. That lack of diversity would mean that it's easier for a relay
> > adversary to operate or compromise relays to attack traffic, and it's
> > easier for a network adversary to see more of the network than we'd like.
> 

[tor-dev] Design for onionsite certification and use with Let's Encrypt

2015-08-24 Thread Paul Syverson
Hi Alec, Seth, Peter, Mike, all,

I'm enthused about the progress Alec reported about the Onion RFC for
certs for onion addresses in recent tor-dev posts and elsewhere.

I wanted to further discuss a design for binding .onion addresses with
registered (route-insecure) addresses. This ties in to in-person
discussions I had with Seth, Peter, and Mike back in June about how this
all dovetails with Let's Encrypt and Tor Browser, which is why I am
also addressing this message to them directly. I hope they can comment
on whether this design seems realistic in that regard and any major
caveats, stumbling blocks, etc.

I'll start with a description of goals, plus comments on complementarity
with the readability of onion addresses themselves and with other recent
tor-dev discussion topics.

I did a partially related post to tor-assistants on one-sided onion
services back in June that covered perhaps too many alternatives
concerning onion services and too many goals and too much of the
motivation, and none of that adequately separated. I think it left
most scratching their heads. This is an attempt to be a bit narrower
and hopefully clearer. Those not interested in even the briefly
described motivations and background set out here can skip below to
the high-level design itself.

Goals, Caveats, Complementarity to other recently discussed related topics

A main goal is to give people a way to provide route-secure access
to their websites in somewhat the same way that the current
certificate and https protocol infrastructure lets them provide
data-secure access to their website.

I would really like to have a version of this be an offering as part
of obtaining a certificate from Let's Encrypt because I would like it
to encourage people to offer route-secure versions of their sites in
the same way that Let's Encrypt as currently put forth is meant to
encourage them to offer data-secure versions of their sites. Having
this built into something like Let's Encrypt should make it easy for
users to set up onionsites to provide security for their websites.

The design should be neutral between double onion services
(services where connections involve a Tor circuit from the client and
a Tor circuit from the server, such as the currently deployed design)
and single onion services (basically just having a Tor circuit from
the client). There's a draft Tor Proposal by John Brooks, Roger
Dingledine, and me on single onion services that I believe John
should be making available soon, but I want to leave such details
aside. I'm taking single onion services as the paradigmatic typical
case, but unless it creates a big problem I would like to assume both
single and double onion services will be compatible.

Obviously an onion service tied to a registered domain doesn't cover
many important uses of onion services, but it should cover many
existing use cases. Note also that wanting to offer network-location
protection for a service can be compatible with having a registered
domain name for that service (and whether or not there was any attempt
to obscure information about the registrant of the domain name). In
some cases it is not compatible, but not necessarily.

I think this is basically complementary to ways to make onion addresses
more readable and recognizable. I'm OK with whatever address format will be
acceptable to the RFC that maintains the current self-authenticating
property (not getting into quibbles about computational strength of that
self-authentication), as long as it remains something that will fit
into a cert as described below.

I'm also leaving as an extension, mentioned at the end below, offering
an onion service for someone's site that is not associated with a domain
name she has registered.  (E.g. an onionsite tied to Mary's Wordpress
Blog, with, e.g., the goal being more about guarantees of binding to
Mary than about guarantees of binding Wordpress.) I think that is
another important and useful case, which we discussed in our W2SP
paper, but I'd like to mostly leave it aside for now.

High-level Design

Creating the DV Cert

At least the same DV level of checking should occur as for existing
registered domain names. So the email check should include the
onion name that is being bound as well as the route-insecure name(s).
For simplicity, I am assuming a single onion address and possibly
a small number of registered domain names, although I'm guessing doing
this for a similarly small number of onion addresses might be made to work
as well. (I'm assuming no wildcards, but maybe I'm not being ambitious
enough.)

Besides a check at the registered-domain name(s) a check should also
be made that the onionsite verifies association with the
registered-domain site(s). It is not as reasonable to assume that
email infrastructure corresponding to the onion address is in
place. Instead, a validation query protocol will be needed that simply
connects to the onionsite and asks if it is acceptable to certify
association of the onionsite 

Re: [tor-dev] Design for onionsite certification and use with Let's Encrypt

2015-08-24 Thread Paul Syverson
On Mon, Aug 24, 2015 at 02:25:16PM -0400, Paul Syverson wrote:

 If onion keys could be themselves linked in a PGP-like web of trust,

Gah! Too many already used technical terms. By onion key I meant
here private authentication key associated with the .onion address not
private key for authenticating a relay in building a Tor circuit (the
prior, hence proper, usage and last vestige of the actual onions
giving onion routing its name).

Likewise, when I said redirect in my message, I only meant the address
rewritten by the HTTPS Everywhere ruleset and not anything else.

I hope that's all the distracting potential terminological confusions.
Now people can focus just on the confusions based on things I actually
meant to say ;)

aloha,
Paul


Re: [tor-dev] [RFC] On new guard algorithms and data structures

2015-08-21 Thread Paul Syverson
Hi Leeroy,

On Fri, Aug 21, 2015 at 08:09:13AM -0400, l.m wrote:
 Hi,
 
 I'm curious what analysis has been done against a gateway adversary.
 In particular dealing with the effectiveness of entry guards against
 such an adversary. There's a part of me that thinks it doesn't work at
 all for this case. Only because I've been studying such an adversary
 at the AS-level and what I see over time is disturbing. Any pointer to
 related material?
 

You may find the following useful. 
http://www.nrl.navy.mil/itd/chacs/biblio/users-get-routed-traffic-correlation-tor-realistic-adversaries

Analysis there is now a few years old, but this is the first attempt
to try to fully consider the sort of question I think you are
asking. This was one of the prompts for the move from three guards to
one, as described in
https://www.petsymposium.org/2014/papers/Dingledine.pdf

There is subsequent related published work on measurement and analysis
of AS and similar adversaries, e.g.,
http://www.degruyter.com/view/j/popets.2015.2015.issue-2/popets-2015-0021/popets-2015-0021.xml?format=INT

Also subsequent work on managing assignment of guards in a practical and
secure manner (although this paper pretty much assumes only relay adversaries).
http://www.degruyter.com/view/j/popets.2015.2015.issue-2/popets-2015-0017/popets-2015-0017.xml?format=INT

This also remains an active area, both for analysis and for AS-aware
route selection. (I haven't put in any pointers to papers on the latter.)

HTH,
Paul


Re: [tor-dev] Future Onion Addresses and Human Factors

2015-08-08 Thread Paul Syverson
Hi Alec,

On Sat, Aug 08, 2015 at 11:36:35AM +, Alec Muffett wrote:
 Hi All,
 
 Having Beer with Donncha, Yan and others in Berlin a few days ago,
 discussion moved to Onion-Address Human Factors.
 
 Summary points:
 
 1) it’s all very well to go and mine something like
 “facebookcorewwwi” as an onion address, but 16 characters probably
 already exceeds human ability for easy string comparison.
 
 2) example of the above: there are already “in the field” a bunch of
 onion addresses “passing themselves off” as other onion addresses by
 means of common prefixes.
 
 3) next generation onion addresses will only make this worse
 
 4) from Proposal 244, the next generation addresses will probably be
 about this long:
 
 a1uik0w1gmfq3i5ievxdm9ceu27e88g6o7pe0rffdw9jmntwkdsd.onion
 
 5) taking a cue from World War Two cryptography, breaking this into
 banks of five characters which provide the eyeball a point upon
 which to rest, might help:
 
 a1uik-0w1gm-fq3i5-ievxd-m9ceu-27e88-g6o7p-e0rff-dw9jm-ntwkd-sd.onion
 
 6) using underscores would be a pain (tendency to have to use SHIFT
 to type)
 
 7) using dots would pollute subdomains, and colons would cause
 parser confusion with port numbers in URIs
 
 8) being inconsistent (meaning: “we extract the second label and
 expunge anything which is not a base32 character”, ie: that
 with-hyphens and without-hyphens) may help or hinder, we’re not
 really sure; it would permit mining addresses like:
 
   agdjd-recognisable-word-kjhsdhkjdshhlsdblahblah.onion #
   illustration purposes only
 
 …which *looks* great, but might encourage people to skimp on
 comparing [large chunks of] the whole thing and thereby enable point
 (2) style passing-off.
 
 9) appending a credit-card-like “you typed this properly” extra few
 characters checksum over the length might be helpful (10..15 bits?)
 - ideally this might help round-up the count of characters to a full
 field, eg: XXX in this?
 
 
 a1uik-0w1gm-fq3i5-ievxd-m9ceu-27e88-g6o7p-e0rff-dw9jm-ntwkd-sdXXX.onion
 
 10) it might be good to discuss this now, rather than later?
 
 Hence this email, in the hope of kicking off a discussion between
 people who care about human factors.  :-)
 

These are all good points. Not sure you are so much kicking off as
joining some discussions. Maybe you will help pull several discussions
into some sort of coordination. Anyway, I would say that there have
been broadly two approaches to human factors and onion addresses which
the above should complement well.
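For concreteness, points (5) and (9) of Alec's list can be sketched as follows; the grouping width, number of check characters, and checksum function are all illustrative placeholders of mine, not a proposed scheme:

```python
# RFC 4648 base32 alphabet as used in onion addresses, lowercased.
BASE32 = "abcdefghijklmnopqrstuvwxyz234567"

def bank_address(addr, width=5):
    """Break a bare onion label into banks of `width` characters,
    as in point (5), giving the eyeball points on which to rest."""
    label = addr[:-len(".onion")] if addr.endswith(".onion") else addr
    banks = [label[i:i + width] for i in range(0, len(label), width)]
    return "-".join(banks) + ".onion"

def check_chars(label, n=3):
    """Toy credit-card-style check characters from point (9): derive n
    extra base32 characters from the whole label. Purely illustrative;
    a real scheme would be fixed in the relevant proposal."""
    acc = 0
    for pos, ch in enumerate(label):
        # Position-dependent accumulator so transpositions change
        # the check characters, not just substitutions.
        acc = (acc * 37 + ord(ch) * (pos + 1)) % (32 ** n)
    out = ""
    for _ in range(n):
        out = BASE32[acc % 32] + out
        acc //= 32
    return out
```

Banking the example address from point (4) reproduces the display form in point (5); appending `check_chars` of the label would give the "you typed this properly" suffix of point (9).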

One is to produce human meaningful names in association with onion
addresses. Coincidentally, Jesse has just announced to this same list a
beta-test version of OnioNS that he has been working on for the Tor
Summer of Privacy. See his message or

https://github.com/Jesse-V/OnioNS-literature

The other that I am aware of is to bind onion addresses in a human
meaningful way to existing names, typically registered domain names,
but could be e.g. a Facebook page rather than a domain name per se. A
preliminary description of this can be found in "Genuine Onion:
Simple, Fast, Flexible, and Cheap Website Authentication". Both paper
and slides PDF can be found under
http://ieee-security.org/TC/SPW2015/W2SP/

I have since that presentation been talking to Let's Encrypt folk and
others about ways to expand and revise the ideas therein. Some of us
have also worked on a related Tor Proposal on single onion services
(aka direct onion services vs. location-hidden aka double onion
services) that would be posted if I could ever find time to just add a
little more to make it self-contained.  We expect to have a revised
and expanded version of the above paper in the not too distant
future. (This week I'll be giving a course about Tor at the SAC Summer
School and the Stafford Tavares Lecture at SAC in New Brunswick, but
will be almost completely skipping onion services since there is
basically infinite amounts of Tor things to talk about.) Picking this
up again significantly in about a week and a half or thereabouts.

aloha,
Paul




Re: [tor-dev] Listen to Tor

2015-05-26 Thread Paul Syverson
On Fri, May 22, 2015 at 04:33:39PM -0600, Kenneth Freeman wrote:
 
 
 On 05/22/2015 04:27 PM, l.m wrote:
  
  So...wouldn't the torified traffic sound like...white noise? I can
  fall asleep to that.
 
 In and of itself a sufficient condition.
 

Safe data gathering notwithstanding, it would be interesting if there
were actually diagnostic or other information that became salient when
rendered in an auditory modality: higher fraction of highly
interactive (e.g. IRC) traffic?, sudden DDoS underway?, why does it
sound different when Europe wakes up than when California wakes up?,
does that stop happening if a botnet uses Tor for C&C?, and does
whiteness (pinkness?) of noise reflect a decent metric for
traffic-security, etc.

aloha,
Paul


Re: [tor-dev] Draft of proposal Direct Onion Services: Fast-but-not-hidden services

2015-04-21 Thread Paul Syverson
On Mon, Apr 20, 2015 at 03:18:06PM -0400, A. Johnson wrote:
  I think new users might not appreciate the difference between similarly 
  named terms and then choose the wrong one to their detriment.  It seems 
  better that they should later learn of shared technology that's not clear 
  from the naming differences than be surprised by differences in security 
  properties that they incorrectly assume from similar names.  (Perhaps more 
  generally, the naming should reflect how users---broadly construed---should 
  think about these things rather than the mental models that are useful as 
  developers.)
 
 It is actually for usability that I dislike making unnecessary
 distinctions. “Onion service” makes it simple for clients: xyz.onion
 = service accessible only through Tor.

This may be the central source of our disagreement and underscores the
importance of terminology. I think of onion service as meaning a
service that is reachable only inside of Tor not merely accessible
only through Tor. 

Suppose someone has a sensitive file that they don't want the wrong
people to obtain or obtain before, e.g., an intended public
release. It would be good for them to easily tell whether the server
they're trusting with that file is location protected or
self-authenticated or both. If both "Tor-required" and "heretofore
hidden" services are called "onion services", then it won't be apparent
simply from the address. (Substitute whatever terms you like for
"Tor-required" and "heretofore hidden", which I'm hoping are
adequately denoted by my usage here.)
self-authentication a la current hidden services for those that we
want to be faster and more convenient if it e.g. would significantly
affect performance?  My point is that if we assume these are all
called 'onion services', we are likely to build in all kinds of design
requirements or trade-offs without first deciding what we want these
things to do and whether it thus makes sense to bind them together in
this way (or not).  This will then be baked into what 'onion' will mean
and entitle one to assume, even or especially for the users that cannot
articulate this technically. (As an imperfect analogy, think of the
semantics of the lock icon or the green highlighting etc. in URL bars.)
Put differently, whoever's terminological preferences win, we
should get much clearer on these things before we treat this draft as
more than a toy to help us work these out.

aloha,
Paul



Re: [tor-dev] Draft of proposal Direct Onion Services: Fast-but-not-hidden services

2015-04-20 Thread Paul Syverson
On Mon, Apr 20, 2015 at 12:04:24AM +0200, Moritz Bartl wrote:
 Thanks George!
 
 On 04/09/2015 08:58 PM, George Kadianakis wrote:
  - We really really need a better name for this feature. I decided to
go with Direct Onion Services which is the one [...]
 
 Why not simply onion service?

Because we have already started using onion service to cover what we
previously called hidden services until we realized that, among
other things, that term is misleadingly narrow concerning the benefits
such services provide. Cf., e.g., "Genuine Onion":
http://www.nrl.navy.mil/itd/chacs/syverson-genuine-onion-simple-fast-flexible-and-cheap-website-authentication

My latest thinking about the terminology is that we should call them
something like "client side onion service" (CSOS, suggested
pronunciation "C-sauce"). All these terms seem to have limitations of
one form or another, and this is no exception.  This is long when
compared to "hidden service", but "CSOS" is not longer to say than
"HS". See the comments about pronounceability earlier in this thread.
We could go with "client protecting onion service", but that doesn't
differentiate it from ordinary onion services, which also protect
clients.  "Client only onion service" or "client oriented onion
service" does that (either would be COOS, which is nicely one
syllable---rhymes with 'loose').  We could use "clientside onion
service" or COS, which could be pronounced "see-oh-ess" or simply
"cuz". This would be as pronounceable as COOS but isn't as direct in
connotation IMO.

An advantage of using 'side' in the name is that this can generalize
to the obviously complementary server side onion service (SSOS), both
of which are onesided onion services (OSOS). Note that Tor2Web
effectively converts ordinary two-sided onion services to SSOS. Most
of this probably won't see much use unless someone writes a bunch of
papers about them or some serverside use takes off. But I think the
one we're talking primarily about, CSOS or COS, would.  COOS would
also complement server oriented/only onion service (SOOS, in honor of Theo
Geisel ;) but the one sided generalization becomes something like
one side only onion service (OSOOS). Fortunately, as I said, I don't
think we actually need such a term for regular use.

Of these I currently think COOS comes closest to conveying what we
want and balancing out the various goals. And I lean towards the
'oriented' rather than 'only' de-acronymization.

aloha,
Paul


Re: [tor-dev] Draft of proposal Direct Onion Services: Fast-but-not-hidden services

2015-04-20 Thread Paul Syverson
On Mon, Apr 20, 2015 at 08:51:59AM -0400, A. Johnson wrote:
  
  Why not simply onion service?
  
  Because we have already started using “onion service” to cover what we
  previously called “hidden services”
 
 Right.
 
  My latest thinking about the terminology is that we should call them
  something like “client side onion service” (CSOS, suggested
  pronunciation “C-sauce”).
 
 These suggestions are too long, in my opinion, and a short acronym
 cannot substitute for a short name. Of primary importance, I think,
 is that the name be easy to use and meaningful. I like the
 “[modifier]+onion service” approach, which can be further shortened
 to “[modifier]+service” when the onion context is clear. This
 already works for “hidden service”, which we can more clearly refer
 to using “hidden onion service”. Some options following this pattern
 are

I agree with the above.

   1. fast onion service
   2. exposed onion service
   3. direct onion service
   4. peeled onion service
   5. bare onion service
 

I don't like any of these. They all fall into the same trap as "hidden
service", or another I'll set out momentarily. I was going to mention
these issues in my last message, but didn't; I wanted to focus on a
positive suggestion rather than go on too long about why I was
rejecting alternatives.


The problem with "fast", "direct", and maybe "bare" is that they
describe some property we're trying to provide with these. Like
"hidden", I think the chance that they will evolve or be applied in
some way to which these terms won't apply is too great. Then we'll be
saying things like, "When this design was first proposed, this was
considered a fast (direct) connection vs. what the previous onion
services design did. We now have foo, which is faster (more direct),
and we're using fast (direct) onion services for application bar,
which is not actually very fast (direct), and we don't really care if
it's particularly fast (direct) for this application," etc. Think about the
extent to which hidden services are used for other things than serving
up web pages.

The problem with "exposed", "peeled", and maybe "bare" is that these
all imply that these are onion services that are diminished in some
way. I can just picture the paper titles and, worse, inflammatory news
headlines the first time someone shows some attack on some aspect of
the design (or more likely on something else entirely on a system that
is configured as such a service). I think anything implying vaguely
lesser onion services is unacceptable.

This is why I'd like to have a name that reflects exactly and only
what the system does, which is require that connections use
Tor. Actually, the more I ponder this, the more I return to a point I
raised weeks (months?) ago that I'd just as soon not call it
"[modifier] onion service", because if it ends up not having the .onion
domain name or requiring lookup in the HSdir system etc. it will be a
very confusing misnomer vs. what we are now calling onion services. I
thought I had accepted the "[modifier] onion service" approach, but I'm
going back to my former position.

As we've been discussing, long names are problematic, but the shorter
ones that may evolve can be even more problematic.  We originally
called hidden services "location-hidden services", which we tied back
to even earlier terminology, noting in the original Tor design paper
"Rendezvous points are a building block for location-hidden services
(also known as responder anonymity)". The shortening to "hidden service"
was convenient for discussion, but led to many of the problems we
now face. This is another reason why "[modifier] onion service" is
problematic; it will almost certainly get shortened in use, just
as "location-hidden service" did.

I think the best thing would be a neologism that, most importantly,
won't get abused or cause confusion because of some connotation of the
name.  Bonus if it will nonetheless make sense to anyone who knows the
system, and can be explained in a few simple words to anyone who
doesn't.

Here's one suggestion: must-tor service (or mustor service if we want
to be even more compressed. Either can also play off the idiomatic
connotation of only accepting those connections that pass muster, but
that's really secondary).  If someone knows even a little about Tor
(even if they've never heard of onion services or hidden services)
they can maybe guess what the service is about. If they know nothing
about Tor, saying that connections to this service must come through
the Tor network explains the name immediately, even to someone whose
next question is "What's the Tor network?".

Note that if mustor services also have .onion addresses I don't see
that as a problem at all. I could explain that too, but I'll stop here
for now.

aloha,
Paul


Re: [tor-dev] Draft of proposal Direct Onion Services: Fast-but-not-hidden services

2015-04-20 Thread Paul Syverson
On Mon, Apr 20, 2015 at 01:05:16PM -0400, A. Johnson wrote:
  The problem with "fast", "direct", and maybe "bare" is that they
  describe some property we're trying to provide with these. Like
  "hidden", I think the chance that they will evolve or be applied in some
  way for which these terms won't apply is too great.
 
 I disagree in general. Hidden service is still a perfectly accurate
 term. “Fast” may have this issue if such services change and take
 advantage of the fact that the server location is known for other
 purposes (e.g. location-based security improvements).

I disagree. Firewall punching service administration may also involve
hiding, which may or may not be viewed as a benefit. But calling this
"hidden" is not a good description of the most salient or important
aspects of the service.  Similarly for the authenticated services
Griffin and I described in "Genuine Onion". Admittedly the latter
currently is largely academic with only a few working examples at the
moment.

 
  The problem with "exposed", "peeled", and maybe "bare" is that these
  all imply that these are onion services that are diminished in some
  way.
 
 I can see this with “exposed” (although it actually has the
 advantage of making it clear to the operators that the service is
 *not hidden*). Neither “peeled” nor “bare” seems negative to me.

You haven't seen the headlines in Time Magazine or The Register yet.

 
  because if it ends up not having the .onion
  domain name or requiring lookup in the HSdir system etc.
 
 This doesn’t seem relevant. We are discussing an existing proposal,
 in which the .onion domain and Tor's name resolution service are
 used.

See my last paragraph below.

 
  This is another reason why [modifier] onion service is
  problematic; it will almost certainly get shortened in use, just
  as location-hidden service did.
 
 The obvious and suggested shortening (i.e. omitting the word
 “onion”) works well, in my opinion.

What then was wrong with "clientside service", or even "client service"?
(Although I am again increasingly sour on the idea of trying to treat
these and heretofore hidden services as somehow broadly the same
animal.)

 
  Here's one suggestion: must-tor service (or mustor service if we want
  to be even more compressed.
 
 This reminds me of the “Tor-required service” suggestion you
 initially made. I dislike it for the same reasons, the primary
 one of which is that it uses two entirely separate names for
 services that actually will probably be indistinguishable to the
 user. That’s why I like the “onion services” umbrella over both
 hidden and fast/direct/exposed/public/peeled/etc.


Thank you for making my point. My fundamental concern is that these
are two separate services with important differences. We will use the
same (or too similar) names for them, and the user will be confused
about which s/he is using. (Note that "user" here includes the Tor user
setting up the service, not just the one who is the client of that
service. Also those doing subsequent design and analysis are very much
steered by terminological choices: "anonymity", "hidden", etc.)  A
similar point was raised to me after my post earlier today by someone
in a private response. I encouraged reposting the point in public
discussion in this forum, but I haven't heard back or seen that, so
will wait to say more.


A primary motivation here on terminology is to make sure we don't
rush ahead and adopt terminology and/or design that runs them
together, and then decide that the design and/or terminology that
follows this running-together naturally follows. That's viciously
circular justification.
 
 Best,
 Aaron
 




Re: [tor-dev] Should popularity-hiding be a security property of hidden services?

2015-04-15 Thread Paul Syverson
Hi George,

Thanks for taking up the challenge I raised to you of coming up with
use cases where leaking popularity is a threat.

Perhaps others have suggested that we don't worry about popularity at
all, but for the arguments I had been trying to make these are straw
men. I don't suggest that we completely ignore popularity.  As one
simple example, if you monitored and published the popularity of onion
services at the level of seconds or minutes (maybe even coarser)
adversaries could almost certainly construct practical intersection
attacks on users of some onion services whose client-side traffic was
being monitored. 
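As one concrete way to see the intersection risk just mentioned, here is a toy sketch (my own illustration, with made-up client names and time intervals, nothing from any real protocol or dataset):

```python
# Toy sketch of an intersection attack enabled by fine-grained
# popularity data. Client names and intervals are made up.

def intersect_candidates(service_active_intervals, client_activity):
    """client_activity maps each monitored client to the set of time
    intervals in which that client sent Tor traffic. Intersecting the
    clients active in every interval where the target onion service
    showed activity shrinks the candidate set."""
    candidates = set(client_activity)
    for t in service_active_intervals:
        candidates &= {c for c, ts in client_activity.items() if t in ts}
    return candidates

activity = {"alice": {1, 2, 3, 5}, "bob": {1, 3}, "carol": {2, 4, 5}}
print(intersect_candidates([1, 3, 5], activity))  # {'alice'}
```

The finer the time granularity of the published popularity series, the more intervals there are to intersect over, and the faster the candidate set collapses.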

You noted anonymity is not binary, but you have only addressed
popularity at a binary level: protect it or ignore it. We have an
unfortunate tendency to sometimes do this in the Tor design
community. For example, any design choice that partitions (or more
generally statistically separates in any way) clients by the portions
of the network about which they've been given information is not even
worthy of consideration because partitioning is just bad. On the other
hand, some pseudonymous profiling by exits is simply acceptable
because of practicality considerations (and indeed, time to keep
opening new connections on existing circuits has recently been
significantly increased in Tor Browser Bundle for usability
reasons---with a bit of discussion, but no significant analysis and no
Tor Proposal). These are just single examples on each side for
contrast, but others are easy to produce. I don't want to get into
addressing the problem of this tendency in general here; I just want
to make sure that we avoid specifically doing that for this problem.

I think I mentioned to you previously the sorts of popularity
statistics I would like to gather. But perhaps I was unclear. I'll set
it out here publicly for others to comment on. Details might change,
and of course we'd have to worry about particular protocols. That's no
different than anything else in Torland.  But I want to assume that
something like the following is basically feasible.  As an argument
from authority, I talked to Aaron a bit about how you might do this
and we were both convinced it should be feasible to do this securely.

So, assume we have an onion service statistics gathering protocol that
outputs, say, weekly the number of connections and bandwidth carried by
the highest 5 percent, then 10 percent, then 20 percent, then 40
percent, then bottom 60 percent of onion services.  I take it as given
that these would be useful for many reasons, some of which you
cited. We can revisit usefulness as needed.
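A rough sketch of the binning described above (my own illustration; the input list is made up, and a real deployment would need a secure, privacy-preserving aggregation protocol rather than per-service data in the clear):

```python
# Toy sketch of the weekly binned statistics described above.
# Input data is made up; a real deployment would need a secure,
# privacy-preserving aggregation protocol, not per-service data
# in the clear.

def binned_stats(services):
    """services: list of (connections, bandwidth) per onion service.
    Returns totals for the top 5/10/20/40 percent and the bottom 60
    percent of services, ranked by bandwidth."""
    ranked = sorted(services, key=lambda s: s[1], reverse=True)
    n = len(ranked)
    bins = {}
    for label, frac in [("top 5%", 0.05), ("top 10%", 0.10),
                        ("top 20%", 0.20), ("top 40%", 0.40)]:
        cut = max(1, int(n * frac))
        bins[label] = (sum(c for c, _ in ranked[:cut]),
                       sum(b for _, b in ranked[:cut]))
    cut = int(n * 0.40)
    bins["bottom 60%"] = (sum(c for c, _ in ranked[cut:]),
                          sum(b for _, b in ranked[cut:]))
    return bins

stats = binned_stats([(100, 10_000), (50, 5_000), (10, 1_000), (5, 500),
                      (2, 200), (1, 100), (1, 50), (1, 25), (1, 10), (1, 5)])
print(stats["top 10%"], stats["bottom 60%"])
```

Note how coarse the output is: only five aggregate bins per week, not per-service or per-interval figures.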

The question I would like to have answered is what sort of practical
threat could be posed by leaks from this. One could imagine an active
attacker that hugely fluctuates the volume of a given onion service to
determine which bin it had been in, assuming very generously that this
isn't covered by the noise of other onion services, or a very long attack
on a service whose volume does not otherwise change.

These statistics are not a threat in the parkour example. They do
not reveal historical volumes of individual onion sites.

In the dystopian future scenario, the authorities know which hidden
services are run by the rebels but not which ones are popular, and
they want to take down the popular ones quickly since the revolution
is imminent. If they happen to guess the right few they could inflate
the activity (if they can access the onion site) and learn in a week
that they were popular (assuming that they are lucky enough to be sure
that, e.g., noise doesn't obscure that). This is a pretty iffy and low
bang for the buck attack. As a contrasting example, authorities could
easily locate the guard(s) of targeted onion sites (we're assuming
they can access targeted onionsites) via congestion attacks and then
just monitor the network activity from the guard or its ISP to see the
popularity of targeted onionsites in realtime. Not to mention
deanonymizing anyone they are watching on the client side. This could
be done faster, easier, and more productively than using the
statistics.

Tor is full of security vs. performance and/or efficiency and/or usability
trade-offs. If we're going to rule out any onion service popularity
statistics, I'd like some indication of a realistic potential threat.
So far I don't feel I've heard that.

aloha,
Paul


Re: [tor-dev] Renaming arm

2015-03-29 Thread Paul Syverson
On Fri, Mar 27, 2015 at 06:47:19PM -0700, Damian Johnson wrote:
  Did some searching this morning and found another I like almost just
  as well, and might be more fitting: Erebus.
 
 Actually, I'm warming up to Nyx too, which has the advantage of being
 shorter. Surprisingly it too doesn't have much in terms of conflicts
 (mostly just cosmetics and gaming).
 
 Both Nyx and Erebus might make really good names. Curious to hear how
 the community feels about these.
 

Nyx sounds fine to me too (as does Erebus). I assume you have
considered and determined to not be too significant that many people
use *nix (sp?) when they want to refer generically and collectively to
various flavors of Unix-like OSes.  I'm mentioning it just in case
that slipped under the radar, so you are making a conscious decision
that you're happy with. In fact I think this is a point in its
favor. I don't think the common use of *nix hurts because in print I
doubt anyone would use 'Nyx' for what I just described (although check
out the usage collision astronomers had for 'Nyx' and 'Nix'), and in
verbal conversation it is easy to separate context.  (Probably nobody
will be talking about the delousing product ;)

aloha,
Paul


Re: [tor-dev] Performance and Security Improvements for Tor: A Survey

2015-03-13 Thread Paul Syverson
Only glanced through it, but it looks amazingly comprehensive for a 32
page paper (plus references). I haven't read it yet, but a glance
suggests it could be a go-to reference to give to people wanting to
get up to speed on Tor and its current research questions. Congrats!

aloha,
Paul

On Fri, Mar 13, 2015 at 02:01:56PM +0100, Ian Goldberg wrote:
 As I mentioned at the dev meeting, Mashael and I were just finishing up
 a survey paper on Tor performance and security research.
 
 The tech report version was just posted on eprint:
 
 https://eprint.iacr.org/2015/235
 
 for your perusing pleasure.  ;-)
 
 Thanks,
 
- Ian


Re: [tor-dev] Tor Project Idea for GSOC 2015

2015-02-21 Thread Paul Syverson
Hi Gautham,

On Thu, Feb 19, 2015 at 03:53:00PM +0530, Gautham Nekkanti wrote:
 Hi,
 
 I am Gautham (icodemachine from IRC and TRAC). I am willing to
 participate in GSoC 2015. I was brainstorming for project ideas and thought
 of this useful project idea.
 
 I want to put forward a project idea of Simple analytics tool for HIDDEN
 service providers.

Nice idea.

 Although, there are already thousands of
 third-party traffic statistic tools, most of them require javascript and
 just defeat the whole purpose of server anonymity. This project is a
 little similar to Arm; instead it involves listening for how many users are
 connected to our site and parsing it.
 
 Advanced metrics like IP addresses of visitors, countries, e.t.c. wouldn't
 be available as it is pointless in our case (Since the IPs reflect the IPs
 of exit nodes).

I was going to say that it could still be useful to collect these in
order to notice patterns of (mis)behavior from exit relays; also, if
shared, this could be useful for statistics to help understand the
network. The latter wouldn't be of direct use to the onion service
provider but both would help in general...

But then I realized this reasoning accepts your statement that IPs are
of exit nodes.  Actually, for onion services all connections are
outbound, so IPs are only of guard nodes for the onionsite.

But this little mistake also made me think that sharing of the other
statistics would still be useful for understanding the use of onion
services. This would have to be only if the onionsite operator wanted
to share and should be configured with consideration for privacy and
security. There is existing project work looking at supporting
voluntary indexing by onion services and this could be a nice
complement to that. Playing up that aspect could perhaps up the
interest of potential mentors.

Cool idea in any case,
Paul

 So, it basically displays the number of visitors and a few
 other metrics. Historic data will be presented graphically. The data will
 be accessible through a localhost website and Historic data would be stored
 in a local database. Although, it is not nearly as advanced as other
 statistic tools, it would still be an essential tool for Hidden service
 providers to analyze their traffic figures. How would it be? Please share
 your views.
 
 Thanks,
 Gautham




Re: [tor-dev] A proposal to change hidden service terminology

2015-02-11 Thread Paul Syverson
On Wed, Feb 11, 2015 at 11:52:37AM +, str4d wrote:
 
 Erik de Castro Lopo wrote:
  A. Johnson wrote:
  
  Several of us [0] working on hidden services have been talking 
  about adopting better terminology.
  
  In general, I am in agreement with this, but I wonder if now might
   be a good time to unify Tor terminology with other similar 
  technologies like I2P and Cjdns/Hyperboria.
 
 It is interesting that you raise this, because we at I2P have been
 thinking the same thing. We discussed the issue of I2P terminology at
 31C3 and decided that after 12 years of Tor/I2P coexistence, Tor had
 the upper hand with regard to commonplace terminology.
 
 In our next release, we are changing most of our user-visible
 tunnel-related terms (I2P destination, tunnel, eepsite etc.) to
 instead use Hidden services in some way [0], to draw parallels to
 Tor hidden services - because as far as an end user is concerned, they
 do pretty much the same thing. And as far as we could tell, "hidden
 services" is now considered too generic for Tor [1], so it made
 sense to use it generically. Tags are now frozen for the 0.9.18
 release, but we are still open to further discussion about terminology.
 

One of the problems with "hidden services" is that it focuses
exclusively on the hiding. But even existing deployed Tor onion
services provide other functions, such as traffic and site
authentication. Some of the future services we were discussing might
require that access come through Tor but not hide the server at all. All
of these different properties are about providing, for traffic and
routing, security properties that are commonly understood for data
(e.g., confidentiality, authentication, integrity). Tor, and I think
I2P, tend to focus on traffic and route confidentiality, which is more
commonly called "anonymity" when thinking about the client and "hidden
services" when thinking about the server. Thus, for a truly more
general term I would suggest 'traffic security', which is what I have
called this class of security properties for some time.

  
  I have heard someone (forget who) propose that 'Dark Web' be 
  dropped in favour of CipherSpace which could include all of these 
  privacy perserving protocols, leaving terms like OnionSpace for 
  Tor, I2PSpace/EEPSpace for I2P etc.
 
 I am certainly in favor of this kind of collaborative approach. It's
 hard enough already trying to make this stuff understandable to end
 users (usability and UX of the tools themselves aside), without having
 multiple kinda-similar-but-not tools trying to do so in different
 ways. A united concept front would benefit tools _and_ users.
 

+1 

although the current purpose was not to come up with an alternative to
"Dark Web", since that should die as a misguided attempt to glom
together multiple technical concepts, not just a loaded pejorative term
for a clear technical concept. As such it is simply the things that
can be reached by .onion addresses over Tor that we were trying to
improve the terminology for, not the more general properties. Cf. my
above suggestions for my take on that.

aloha,
Paul


Re: [tor-dev] Tor Attack Implementations (Master's Thesis: Tor Mixes)

2015-02-09 Thread Paul Syverson
On Sun, Feb 08, 2015 at 11:49:57PM +0200, s7r wrote:
[snip]
  
  On this topic you might also enjoy the paper "Sleeping dogs lie on
  a bed of onions but wake when mixed" by Paul Syverson:
  https://petsymposium.org/2011/papers/hotpets11-final10Syverson.pdf
  
 
 Nice paper. Wonder why it isn't in anonbib too. I am used to keeping a
 bookmark on anonbib as a central repository of anonymity research
 papers, so there's my concern :-)
 
 I will add a BibTeX entry. If anyone else discovers missing papers
 please email me and I will add BibTeX entries for them.
 

Thanks for the pointer George. In fact many (most?) of the papers I've
written about onion routing aren't in anonbib. Not sure why that is,
nor, given some of the other papers by myself and others that are
highlighted as especially important, why arguably the most important
papers I've ever written (the paper introducing onion routing, and the
one where we more fully separated the network from the clients and
destinations) aren't highlighted (or even included, in the latter
case).  That's more "huh" than complaining on my part. If I want it
fixed I should get access and do it myself I suppose (and update my
personal webpage more than once every two years while I'm at it, and
other things I haven't put high on priority). In the meantime, you
might look at http://www.onion-router.net/Publications.html for at
least the earlier ones. Cf. also the bibliography of "A Peel of Onion",
although that doesn't much discuss our mixed latency considerations,
or even cite the alpha-mixing paper, etc. (The latter being in sore need
of a deeper exploration and update along the lines many of us have discussed
but not taken time to rigorously examine or write up. Time, time, gotta run.)

HTH,
Paul 


Re: [tor-dev] yes hello, internet supervillain here

2014-11-09 Thread Paul Syverson
On Sun, Nov 09, 2014 at 07:25:39PM +, Fears No One wrote:
 I have some news to report, along with more data.
 
 The August DoS attempt appears to have been a crawler bot after all. An
 old friend came forward after reading tor-dev and we laughed about his
 dumb crawler bot vs my dumb must-serve-200-codes-at-everything nginx
 config. His user agent string only accounts for the spike in August, and
 I see no evidence of a mass crawl from it in my log reports. The
 2014-09_24.old file's spike in traffic doesn't match up with his crawl
 times in any way, but he theorizes that somebody else maybe used the
 same crawler package. 

I don't know the exact timing, but 9/24 would line up with HS crawl activity
that was being conducted in association with the kickoff of Sponsor R work
https://trac.torproject.org/projects/tor/wiki/org/sponsors/SponsorR

HTH,
Paul


Re: [tor-dev] Potential projects for SponsorR (Hidden Services)

2014-10-23 Thread Paul Syverson
Hi all,

NRL is effectively partnered with the Tor Project Inc. for the
SponsorR efforts.  Our (NRL's) tasking is largely overlapping and
somewhat complementary to that of TPI. As such I thought it would be
good to mention the basics of what we are working on to better inform
and coordinate the planning George et al. have begun discussing
in this thread.

Our tasks are:

1. to identify which statistics about hidden services can be collected
and reported without harming user security.

This is also directly part of TPI's tasking, and I expect we will be
collaborating on this directly. We will be working on this probably
starting in c. a month.

2. to develop passive measurement techniques to measure information
about hidden services. This would, for example, allow the collection
of information about the relative popularity of different types of hidden
services, e.g., what fraction of hidden service connections are
highly interactive vs. large data downloads, etc.
Also developing techniques to infer global activity from local observations.

Some of this has already begun. A month ago Roger deployed code on a few
relays to test whether a connection was for HSes vs. something else.
And we did some initial analysis on the global projection based on
estimation of how much bandwidth those relays saw, which varied wildly,
although there are lots of potential explanations for that.
Roger has also already in this thread touched on some statistics that
are interesting but require thought before deciding how/if to collect
them.
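The kind of global projection mentioned above can be sketched as follows (my own toy illustration; it bakes in the simplistic assumption that a relay observes hidden-service traffic in proportion to its consensus-weight share, which the wildly varying estimates suggest is too crude by itself):

```python
# Toy sketch of projecting global hidden-service traffic from local
# relay observations. Relay weights and byte counts are made up; the
# proportionality assumption is deliberately simplistic.

def project_global(observed_hs_bytes, relay_weight, total_weight):
    """Extrapolate network-wide HS bytes from one relay's view,
    assuming the relay sees traffic in proportion to its weight."""
    return observed_hs_bytes / (relay_weight / total_weight)

# Three hypothetical relay reports: (observed bytes, weight, total weight).
reports = [(1_000_000, 50, 10_000),
           (900_000, 40, 10_000),
           (5_000_000, 60, 10_000)]
estimates = [project_global(*r) for r in reports]
# When per-relay estimates disagree wildly, as in the early analysis
# above, the simple proportionality assumption is not enough.
print(estimates)
```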

A primary focus of NRL's work between now and the end of the year has
been and will be on devising a secure and accurate relay bandwidth
measurement scheme, with an emphasis on something that should be much
better than what is now available but also practical and compatible
enough that it could be rolled out in Tor w/in c. a year (and we'll
also be considering designs that are less directly implementable but
more theoretically solid). This is one of Tor's biggest current
vulnerabilities. It is pretty easy to get fake inflated BW numbers so
as to have a consensus weight that allows you to observe amounts of
traffic quite disproportionate to the amount you have actually been
carrying in the past. There have been many published attacks based on
bandwidth inflation, and Tor's current torflow design was not intended
to be secure---and could use some accuracy attention as well. This
also becomes important in the context of gathering HS statistics. If
we are going to be deploying statistic gathering code in a way that is
safe for users and hidden services, it is not enough to say what
statistics are safe to honestly collect. We also need to make Tor's
system of data gathering for those statistics robust to abuse. And one
of the easiest ways to abuse statistics gathering to undermine user
and service security is to manipulate BW attribution to increase the
raw data available to malicious entities. Of course any statistics
that rely on accurate BW measurement will benefit from this work as
well.
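To see why inflated bandwidth numbers matter, here is a toy calculation (my own illustration, not Tor's actual path-selection code): with selection probability proportional to consensus weight, a relay that fakes its weight observes a correspondingly inflated share of circuits.

```python
# Toy calculation: effect of consensus-weight (bandwidth) inflation.
# Numbers are illustrative, not real network figures.

def selection_probability(weight, rest_of_network_weight):
    """Probability of being picked for a circuit position, assuming
    selection proportional to consensus weight."""
    return weight / (weight + rest_of_network_weight)

honest = selection_probability(100, 99_900)       # truthful small relay
inflated = selection_probability(10_000, 99_900)  # same relay, faked weight
# The faked weight buys a ~91x larger share of circuits to observe,
# without carrying a correspondingly larger amount of real traffic.
print(round(inflated / honest, 1))
```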

3. Designing and testing HS performance improvements, particularly as
they affect the crawling and measuring activities on HSes that SponsorR
is interested in.

Again we expect lots of collaboration in this area, although our focus
will be on the above first.

4. Evaluate planned and future changes to HSes for security and
performance, particularly to see how intended SponsorR measuring,
crawling, and indexing techniques for HSes may be affected. For
example, a technique that assumed directories could know when a new HS
is listed would be affected by design changes in proposal 224.

Same comment as for task 3.


Re: [tor-dev] Optimising Tor node selection probabilities

2014-10-12 Thread Paul Syverson
On Sun, Oct 12, 2014 at 06:43:10AM +1100, teor wrote:
 
 On 11 Oct 2014, at 23:00 , tor-dev-requ...@lists.torproject.org wrote:
 
  Date: Fri, 10 Oct 2014 14:33:52 +0100
  From: Steven Murdoch steven.murd...@cl.cam.ac.uk
  
  I've just published a new paper on selecting the node selection
  probabilities (consensus weights) in Tor. It takes a
  queuing-theory approach and shows that what Tor used to do
  (distributing traffic to nodes in proportion to their contribution
  to network capacity) is not the best approach.
  
[snip]
  
  For more details, see the paper:
   http://www.cl.cam.ac.uk/~sjm217/papers/#pub-el14optimising
  
[snip]
 
 
 This is fantastic, Steven - and although we've changed Tor's
 consensus weights algorithm, we still waste bandwidth telling
 clients about relays that would slow the network down.
 
 Your result further supports recent proposals to remove the slowest
 relays from the consensus entirely.
 

I find this theoretically very interesting and an important
contribution, but I'm less sure what conclusions it supports for Tor
as implemented and deployed. A first major question is that the
results assume FIFO processing of cells at each relay, but Tor
currently uses EWMA scheduling and is now moving even further from
FIFO as KIST is being adopted.  There are other questions, e.g., that
the paper assumes it is safe to ignore circuits and streams (not just
for FIFO vs. prioritized processing but for routing and distribution
of cells across relays as well---or, said differently, this ignores
Tor's onion routing).  But I'm thinking if I'm correct even about
this one point, then it would be extremely premature to directly apply
the conclusions of this work to practical proposals for improving Tor
performance. Then of course there are those pesky security
implications to worry about ;) My comments are not meant at all to
question the value of the paper, which I think contributes to our
understanding of such networks. Rather I am cautioning against
applying its results outside the scope of its assumptions.
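Under the FIFO assumption, each relay is essentially an M/M/1 queue with mean sojourn time 1/(mu - lambda). A toy comparison (my own illustration under that assumption, not the paper's actual model or numbers) shows why proportional-to-capacity weighting need not minimize delay:

```python
# Toy comparison under the FIFO (M/M/1) assumption noted above: mean
# sojourn time at a relay with service rate mu and arrival rate lam is
# 1/(mu - lam). Splitting total load proportionally to capacity is not
# the delay-minimizing choice; the classic square-root allocation does
# better. All numbers are illustrative.
import math

def avg_delay(mus, lams):
    """Traffic-weighted mean sojourn time across parallel M/M/1 queues."""
    total = sum(lams)
    return sum((l / total) * (1.0 / (m - l)) for m, l in zip(mus, lams))

mus = [10.0, 5.0]   # relay service rates (capacities)
Lambda = 9.0        # total offered load

# Proportional-to-capacity split (what Tor's old weighting resembled):
prop = [Lambda * m / sum(mus) for m in mus]

# Square-root allocation: lam_i = mu_i - sqrt(mu_i) * slack / sum(sqrt(mu_j))
slack = sum(mus) - Lambda
roots = [math.sqrt(m) for m in mus]
sqrt_alloc = [m - r * slack / sum(roots) for m, r in zip(mus, roots)]

print(avg_delay(mus, prop) > avg_delay(mus, sqrt_alloc))  # True
```

Of course this toy inherits exactly the FIFO assumption questioned above; with EWMA or KIST scheduling the queueing model, and hence the optimal split, would differ.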

Cf. the KIST paper, which itself cites the EWMA introduction paper and
subsequent related work.
http://www.nrl.navy.mil/itd/chacs/sites/edit-www.nrl.navy.mil.itd.chacs/files/pdfs/14-1231-2094.pdf
or
http://www.robgjansen.com/publications/kist-sec2014.pdf 

aloha,
Paul


Re: [tor-dev] [Discussion] 5 ^H 3 hops rendezvous circuit design

2014-02-13 Thread Paul Syverson
On Wed, Feb 12, 2014 at 05:43:10PM -0500, Zack Weinberg wrote:
 On 02/11/2014 11:53 PM, Paul Syverson wrote:
  The biggest concern is that no matter how you handle the commitment
  and the size of the flexible set, you make it fairly easy for a HS
  simply following this protocol precisely and with just the resource of
  a handful of other nodes (n) in the network to identify the client
  guard with certainty each time. (If he owns less than n, it becomes a
  likelihood rather than a certainty each time. Alternatively if he owns
  a few ASes or IXPs he could accomplish similar results with a judicious
  choice of the network location of all n.) Given the push
  elsewhere to use single guards for long periods, this makes guard
  nodes all the more likely to face subpoena or other forms of attack
  since the argument that this is a successful strategy to locate
  clients of interest is greatly strengthened. Since the HS can choose
  two hops that it does not object to, the client should be similarly
  protected, i.e., four relays in the circuit overall.
  
  The other big concern is that this looks like there are many places
  to DoS or resource deplete the hidden service. Earlier designs kept
  per introduction connection state for the HS to a minimum. There may
  be ways to reduce that, but it is an important consideration.
 
 So I have to say that I'm increasingly skeptical about the value of
 guards in general, and in this specific case - where both endpoints are
 emitting nothing but Tor protocol - I'm not convinced guards add
 anything at all, because *even if* both sides happen to pick malicious
 entry points that are controlled by the same adversary, the traffic will
 be indistinguishable from middle-relay traffic!  ... Except, I suppose,
 by traffic correlation to recover the flow and realize that the
 malicious relays are in positions 1 and 3, which in turn means each is
 talking directly to an endpoint.  And then they can do cross-flow
 correlation and figure out which end is the hidden service.  However,
 this kind of long-term correlation attack is *easier* for an adversary
 that controls enough stable nodes that it has a reasonable chance of
 getting picked as a guard by at least one party.  See above skepticism.

Asymptotically guards buy you nothing.  But it's not a long-term
attack that motivated guards. Lasse Overlier and I showed in our 2006
paper "Locating Hidden Servers" that prior to guards, someone owning a
single relay on the live Tor network of the day (c. 250 relays) could
find a hidden server in a matter of minutes. We were focused on what
could be done with a single relay so could only attack HS circuits. But
we noted that the same effect would apply to all Tor circuits if the
attacker had two or more relays. The next year Bauer et al. in
"Low-Resource Routing Attacks Against Tor" showed that specifically
(although in simulation rather than on the live Tor network as we
showed).

In our recent CCS paper we showed that guard compromise for a
reasonably sized adversary was more on the order of several months than
several minutes. So quite the win. The time to circuit compromise for
typical usage behaviors once the adversary has a guard is fairly quick.

My point above was primarily about how a hostile hidden service could
easily learn the guard of the client by simply following the suggested
protocol, and then apply more directed resources on the guard for
circuits of interest to later identify and watch the client and all or
most of his behavior. Without guards, the client is just quickly exposed
directly. Much worse.

 
 That said, these are both really good points, and together, may kill the
 whole concept. :-(  I wonder if we can formalize the problem enough to
 prove one way or another whether you really must have at least four hops?
 

I expect that experimental results will be more compelling than
anything you can prove in this case. I also think that especially for
sensitive users (subject to targeted attacks rather than trawling) the
guards need to be chosen in a trust-aware way, as Lasse and I
suggested and as we have been exploring in more recent papers. Now
once you throw bridges into this mix...

aloha,
Paul
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [Discussion] 5 ^H 3 hops rendezvous circuit design

2014-02-11 Thread Paul Syverson
Hi all,

Apologies for top-posting. I'm making a brief comment about general
concerns in this design. Additional apologies if something already
said obviates my comments. I have purposely not looked at most of the
developments in HS design changes because they are _too_ interesting
to me and I have had to make sure to keep my focus on some other
features of Tor where progress is needed.

The biggest concern is that no matter how you handle the commitment
and the size of the flexible set, you make it fairly easy for an HS,
simply by following this protocol precisely and with just the resources
of a handful of other nodes (n) in the network, to identify the client's
guard with certainty each time. (If he owns fewer than n, it becomes a
likelihood rather than a certainty each time. Alternatively, if he owns
a few ASes or IXPs he could accomplish similar results with a judicious
choice of the network location of all n.) Given the push
elsewhere to use single guards for long periods, this makes guard
nodes all the more likely to face subpoena or other forms of attack,
since the argument that this is a successful strategy to locate
clients of interest is greatly strengthened. Since the HS can choose
two hops that it does not object to, the client should be similarly
protected, i.e., four relays in the circuit overall.

The other big concern is that this looks like there are many places
to DoS or resource deplete the hidden service. Earlier designs kept
per introduction connection state for the HS to a minimum. There may
be ways to reduce that, but it is an important consideration.

aloha,
Paul

On Tue, Feb 11, 2014 at 10:07:20PM -0500, Zack Weinberg wrote:
 On 02/11/2014 11:55 AM, Qingping Hou wrote:
  Hi all,
 
  We are a university research group and following is a proposal for
  possible optimization on Tor's rendezvous circuit design. It's not
  written as a formal Tor proposal yet because I want to have a
  discussion with the community first, get some feedback and make
  sure it does not introduce new vulnerabilities.
 
 I am working with Qingping on this design, and we actually think we
 might be able to get the hop count all the way down to three.  Five is
 solid (assuming we can solve the problems that have come up already,
 and assuming there are no further problems) but with some slightly
 cleverer crypto and a more complicated handshake, it's possible for
 each side to prove to the other that it is *not* cheating on the
 connection chain, which is the primary reason we need extra hops.
 
 First off, recall that a single anonymizing proxy --
 
 Client -- Rendezvous -- Service
 
 still guarantees that Client can't learn Service's IP and vice versa,
 and no single-point eavesdropper can learn both Client and Service's
 IPs.  The reason we don't use single-hop circuits in Tor is that the
 rendezvous itself might be malicious (or subverted).
 
 Now, here is the usual three-hop Tor circuit, but for a hidden service:
 
Client -- Client_Guard -- Rendezvous -- Service_Guard -- Service
 
 The additional guard hops protect both Client and Service from a
 malicious third-party Rendezvous, but - please correct me if I'm wrong
 about this - I believe that is *all* they are doing relative to a
 single proxy.  Therefore, if Client could be assured that Service had
 set up its half of the chain honestly - was *not* deliberately
 forfeiting (some of) its own anonymity in order to learn more about
 Client - and vice versa, a three-hop circuit would be sufficient.
 
 Here is a potential way to accomplish this.  It also has the nice
 property of *not* needing Client and Service to agree on the complete
 list of available relays.  One unusual cryptographic primitive is
 required: we need a cipher that is *deterministic* and *commutative* but
 retains all the other desiderata of a message cipher.  These special
 properties are defined as follows:
 
 Deterministic: E_k(M1) == E_k(M2) iff M1 == M2
 Commutative:   D_q(E_k(E_q(M))) == E_k(M)
 
 The Massey-Omura cryptosystem
 https://en.wikipedia.org/wiki/Three-pass_protocol#Massey-Omura_cryptosystem
 has these properties, but there may be others.
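As a sanity check on the two properties above, here is a toy Python
sketch (not a secure implementation, and the prime, key names, and
message are invented for illustration) using exponentiation over a
prime field, which is the core idea of Massey-Omura: exponents compose
modulo P-1, which gives both determinism and commutativity.

```python
import math
import random

P = 2**127 - 1          # a Mersenne prime; messages are integers in [1, P-1]

def keygen(rng):
    """Pick an encryption exponent e coprime to P-1, plus its inverse d."""
    while True:
        e = rng.randrange(3, P - 1)
        if math.gcd(e, P - 1) == 1:
            return e, pow(e, -1, P - 1)

def E(e, m):            # "encrypt": modular exponentiation
    return pow(m, e, P)

def D(d, c):            # "decrypt": exponentiate by the inverse exponent
    return pow(c, d, P)

rng = random.Random(7)
ek, dk = keygen(rng)    # key k
eq, dq = keygen(rng)    # key q
M = 123456789

# Deterministic: same key, same message -> same ciphertext.
assert E(ek, M) == E(ek, M)
# Commutative: D_q(E_k(E_q(M))) == E_k(M), exactly as defined above.
assert D(dq, E(ek, E(eq, M))) == E(ek, M)
# Round trip: D_k(E_k(M)) == M.
assert D(dk, E(ek, M)) == M
```

Note that determinism is precisely what makes this unsuitable as a
general message cipher; it is tolerable here only because the protocol
encrypts high-entropy fingerprints rather than arbitrary plaintext.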
 
 In what follows, I will use E_k() and D_k() to refer to this
 cryptosystem specifically.  I will use curly braces { ... } to
 indicate a message which is, as a whole, signed by its sender; these
 messages may in practice also be encrypted, but I *think* this is not
 actually required in any case.
 
 In what follows, Fx (for some x) always refers to the fingerprint of
 some Tor node, and
 
 (0) In advance, Client and Service each pick a set of guards that they
 will use for all communication, as in the present system.
 
 (1) Client and Service somehow agree on a shared secret S.  Client and
 Service also each independently pick a secret which is *not* shared
 with the counterparty: Client's secret will be referred to as A, and
 Service's secret will be referred to as B.
 
 (2) Service picks a guard that it will use for this 

Re: [tor-dev] Proposal 223: Ace: Improved circuit-creation key exchange

2013-11-20 Thread Paul Syverson
On Wed, Nov 20, 2013 at 08:36:30AM -0800, Watson Ladd wrote:
 Is it just me, or is this protocol MQV with the client generating a
 fake long term key?

Well, yeah, sort of, but the details are crucial. In "Improving
Efficiency and Simplicity of Tor Circuit Establishment and Hidden
Services" (available on www.syverson.org or the anonbib), Lasse and I
presented a similar protocol and explicitly described how the
similarity to and basis in MQV was a hopeful indicator that it was
sound. But we didn't do a proper security analysis (in any model) in
that paper, leaving that for future work. These authors found a
vulnerability in that protocol, improved on it, and proved their
protocol secure.

-Paul


Re: [tor-dev] Hidden Service Scaling

2013-10-09 Thread Paul Syverson
On Wed, Oct 09, 2013 at 09:58:07AM +0100, Christopher Baines wrote:
 On 09/10/13 01:16, Matthew Finkel wrote:

  Then comes the second problem, following the above, the introduction
  point would then disconnect from any other connected OP using the same
  public key (unsure why as a reason is not given in the rend-spec). This
  would need to change such that an introduction point can talk to more
  than one instance of the hidden service.
 
  
  It's important to think about the current design based on the assumption
  that a hidden service is a single node. Any modifications to this
  assumption will change the behavior of the various components.
 
 The only interactions I currently believe can be affected are the Hidden
 Service instance - Introduction point(s) and Hidden Service instance
 - directory server. I need to go and read more about the latter, as I
 don't have all the information yet.

Indeed. Lots of issues there.

 
  These two changes combined should help with the two goals. Reliability
  is improved by having multiple OP's providing the service, and having
  all of these accessible from the introduction points. Scalability is
  also improved, as you are not limited to one OP (as described above,
  currently you can also have +1 but only one will receive most of the
  traffic, and fail over is slow).
   
  Do you see any disadvantages to this design?
 
 So, care needs to be taken around the interaction between the hidden
 service instances, and the introduction points. If each instance just
 makes one circuit, then this reveals the number of instances.

You said something similar in response to Nick, specifically you said

   I believe that to mask the state and possibly number of instances,
   you would have to at least have some of the introduction points
   connecting to multiple instances.

I didn't understand why you said this in either place. Someone would
have to know they had a complete list of introduction points to
know the number of instances, but that would depend on how HS descriptors
are created, stored, and distributed. From whom is this being hidden?
You didn't state the adversary. Is it HS directory servers, intro point
operators, potential clients of a hidden service?  I don't see why
any of these necessarily learns the state or number of instances
simply because each intro point is chosen by a single instance
(ignoring coincidental collisions if these choices are not coordinated).

Also, in your response to Nick you said that not having instances share
intro points in some way would place an upper bound on the number of
instances. True, but if the number of available intro points >> the
likely number of instances, this is a nonissue. And come to think of it,
not true: if the instances are sometimes choosing the same intro points,
then this does not bound the number of instances possible (ignoring
the number of HSes or instances for which a single intro point can
serve at one time).

Also, above you said "If each instance just makes one circuit". Did
you mean if there is a single intro point per instance?

Hard to say specifically without exploring more, but in general I
would be more worried about what is revealed because circuits are
built to common intro points by different instances, and the intro
points can recognize and manage these (e.g., dropping redundant ones),
than I would be about the number of intro points putting an upper
bound on instances.

HTH,
Paul


Re: [tor-dev] Global semi-passive adversary: suggestion of using expanders

2013-08-23 Thread Paul Syverson
On Fri, Aug 23, 2013 at 03:45:31AM -0400, Roger Dingledine wrote:
 On Fri, Aug 23, 2013 at 09:19:32AM +0200, Paul-Olivier Dehaye wrote:
  The short summary of the weakness of Tor here:
  - We would like the whole protocol to be mixing (to an observer, the
  probability of exiting at any node C given entrance at node A is close to
  1/N),
 
 Right, you're using terminology and threat models from the mixnet
 literature. Tor doesn't aim to (and doesn't) defend against that.
 
 You might find the explanation in
 https://blog.torproject.org/blog/one-cell-enough
 to be useful. The first trouble with mixing in the Tor environment is
 that messages from each user aren't the same size, and it's really
 expensive to make them the same size (round up to the largest expected
 web browsing session).
 
 Another key point: it's not about the paths inside the network -- it's
 about the connections from the users to the network, and from the network
 to the destinations.
 
 That said, for the beginning of your related work, see
 http://freehaven.net/anonbib/#danezis:pet2003
 
 And for a much later follow-up, see
 http://freehaven.net/anonbib/#topology-pet2010
 

You might also want to look at the following for a design that tries
to address your issues.
http://freehaven.net/anonbib/#active-pet2010
See also citations therein for partial solutions.

High-order bit: I think this is about state-of-the-art for this area,
and it's my paper, but we still need a lot of basic research progress
in this space before we would have anything worth putting into Tor.
And, except for adding small amounts of noise (besides uniform cell
sizes, but that should be a gauge of tolerable overhead for anything
we do) to complicate trawling, I'm not very sanguine about the
prospects of this ever making practical sense. You might also consult
my "Why I'm Not an Entropist":
http://www.syverson.org/entropist-final.pdf

aloha,
Paul




Re: [tor-dev] Remote anonymous browsing

2013-04-16 Thread Paul Syverson
On Tue, Apr 16, 2013 at 07:35:45PM +, adrelanos wrote:
I think that having a web server to handle Tor requests would defeat the
  purpose of obfuscation because the server's IP address would be public and
  censors could easily block any connections to it rendering it useless.
 
 It's not so easy if users host their own torified CGI proxies on their
 own servers. - OK, how many users are technically able and willing to do
 that?

We called this "remote-proxy access" in Anonymous Connections and Onion
Routing: https://www.onion-router.net/Publications.html#JSAC-1998
(Somebody should really put the early onion routing papers on anonbib;
almost none of them are there. Copious free time and all that, I
guess.) It could be useful in some circumstances. I imagine if you
wanted to run your own for personal use as suggested, coupling it with
some kind of one-time authentication would be especially useful.
But I can imagine circumstances where it would be useful for general
public use, with the caveat that people will understand the risks and
protections they have here even more poorly than they do with a Tor
client running locally. Similar issues have been considered for tor2web,
which I assume you know about. (If not, you should take a look, although
the goals are not identical.)

-Paul


Re: [tor-dev] Remote anonymous browsing

2013-04-16 Thread Paul Syverson
On Wed, Apr 17, 2013 at 12:46:17AM +, Matthew Finkel wrote:
 
 4) Who do you trust? With this remote-proxy, it really depends on what
 you're looking to gain from using the Tor network. Are you looking for a
 censorship circumvention tool? Then you probably don't want to use a
 remote-proxy node run by the censor or any of its allies. If you're
 looking to remain anonymous...well, anonymous with respect to whom,
 I suppose?

Actually, if you could log in remotely to an interface that isn't
obviously a gateway to Tor and the proxy/bridge there was one that you
ran yourself or otherwise trusted, this could be an easy way to make
sure your transport didn't look like it was talking a Tor protocol
(because it wouldn't be talking Tor protocol).  That's just off the
top of my head, but the point is that there could be scenarios where
this could support circumvention as well as anonymity.

aloha,
Paul


Re: [tor-dev] Tenative schedule for Tor 0.2.4 [Or, why I might reject your patches in November-March]

2012-09-07 Thread Paul Syverson
Minor typo noted.
-Paul

On Fri, Sep 07, 2012 at 12:09:30PM -0400, Nick Mathewson wrote:
 Hi, all!
 
 Last year, I announced a tentative schedule for 0.2.3.x.  We didn't
 stick to it, and things have gone a little pear-shaped with getting
 0.2.3.x stabilized, but I think with a few tweaks we can use something
 similar to get a good schedule out for 0.2.4.x.
 
 My goals remain about what they were before: get release out faster by
 getting better at saying no to features after a release window.  My
 target is March or April 2013.
 
 To that end:
 
 October 10, 2012: Big feature proposal checkpoint.  Any large
 complicated feature which requires a design proposal must have its
 first design proposal draft by this date.
 
 November 10, 2012: Big feature checkpoint.  If I don't know about a
 large feature by this date, it might have to wait.
 November 10, 2012: Big feature proposal freeze. Any small feature

s/small/big/

 which requires a design proposal must have its design proposal
 finished by this date.
 
 December 10, 2012: Big feature merge freeze. No big features will be
 merged after this date.
 December 10, 2012: Small feature proposal freeze. Any small feature
 which requires a design proposal must have its design proposal
 finished by this date.
 
 January 10, 2013: Feature merge freeze. No features after this date. I mean 
 it.
 
 Feb 20, 2013: Buggy feature backout date. Any feature which seems
 intractably buggy by this date may be disabled, deprecated, or removed
 until the next release.
 
 On the meaning of feature: I'm probably going to argue that some
 things that you think are bugfixes are features.  I'm probably going
 to argue that your security bugfix is actually a security feature.
 I'm probably even going to argue that most non-regression bugfixes are
 features.  Let's try to get a release out *early* in 2013 this time,
 come heck or high water.
 
 (This is all subject to change, but let's not push it.)
 
 [0] https://lists.torproject.org/pipermail/tor-dev/2011-July/002851.html
 
 happy hacking,
 -- 
 Nick


Re: [tor-dev] Next ten Tor Tech Reports

2012-08-09 Thread Paul Syverson
On Thu, Aug 09, 2012 at 08:29:25AM +0200, Karsten Loesing wrote:
 Hi Mike,
 
 On 8/8/12 8:13 PM, Mike Perry wrote:
  
  Since HotPETS doesn't count as publishing perhaps this should be
  listed as a tech report:
  http://fscked.org/talks/TorFlow-HotPETS-final.pdf
 
 I agree.  If it counted as publishing, we'd put it on anonbib.  But
 since that's not the case, let's put it on our tech reports list, or
 nobody will find it.

Wait. What!? Since when did anonbib get restricted to what is
published? Paul Karger's MITRE tech report is there. I mean, Wei
Dai's PipeNet mailing-list post is there! There are probably others; I
just mentioned two I knew off the top of my head. I assume that papers
are on anonbib because they've appeared somewhere that one can point
at consistently and they're relatively important, not because they are
"published". "Published" is a useful fiction I'll come back to, but
I don't see why anonbib has to be hamstrung by it.

 
 The only thing I'm worried about is that we shouldn't add reports
 published by other organizations (here: HotPETs) to the Tor Tech Reports
 list.  I'd rather want us to turn your HotPETs report into a Tor Tech
 Report with identical content and put that on the list.


HotPETs being labeled "not published" is just one of the many never
actually solid but increasingly shaky distinctions trying to cope with
the overloaded semantics and quickly evolving meaning of 'published',
wherein 'published', 'refereed', 'archived', 'produced by a recognized
publisher/organization', 'made available for purchase', etc. were all
taken as synonymous (except when they weren't). Most academic research
venues in computer security don't accept things that are already
published or under consideration to be published. (For convenience I
will completely pretend journals don't exist in this, which for most
of science is like saying you'll pretend published research does not
exist.) But presenting the work at a workshop wouldn't be
publication, even if presented works were printed out and made
available to attendees. Putting out a tech report wasn't generally
viewed as publishing (except for patent purposes, which was one of the
motivations for places to have tech reports; but then presentation at
a public meeting also counted---now define 'public'; how many
epicycles are we up to?). Now jump forward a few decades or so and
all of these are on the web. How can you tell if some web page talking
about a meeting and providing links to pdfs of what was presented
there (possibly produced before or afterwards or both) counts as
published?

HotPETs wants to get half-baked innovative stuff, not just the work
after all the proofs have been done, full implementations and
simulations run, whatever. (It also _will_ take stuff that has been
published elsewhere.) But if it counted as published, authors
couldn't submit it to a venue that does count once the details were a
bit more worked out (and that counts in the eyes of tenure committees,
funding agencies, etc. in a way HotPETs does not). So HotPETs labels
its works as not published, because you need to tell people which side
of this nonexistent line the work is on so they know what to do next.

 
 How about we put the LaTeX sources in tech-reports.git, change them to
 use the new tech report template, assign a report number, and add a
 footnote saying This report was presented at 2nd Hot Topics in Privacy
 Enhancing Technologies (HotPETs 2009), Seattle, WA, USA, August 2009.?
  Then people can decide if they rather want to cite our tech report or
 the HotPETs one.

This is pretty standard for tech reports at many universities,
organizations, etc. Also, I think, for stuff on arXiv.

aloha,
Paul


Re: [tor-dev] Analysis of the Relative Severity of Tagging Attacks

2012-04-04 Thread Paul Syverson
On Sun, Mar 11, 2012 at 10:28:04PM +, The23rd Raccoon wrote:
   Analysis of the Relative Severity of Tagging Attacks:
  Hey hey, ho ho! AES-CTR mode has got to go!
 
 
   A cypherpunk riot brought to you by:
The 23 Raccoons
 
 
 
 
 Abstract
 
 Gather round the dumpster, humans. It's time for your Raccoon
 overlords to take you to school again.
 
 Watch your step though: You don't want to catch any brain parasites[0].
 
 
 Introduction
 
 For those of you who do not remember me from last time, about 4 years
 ago I demonstrated the effect that the Base Rate Fallacy has on timing
 attacks[1]. While no one has disputed the math, the applicability of
 my analysis to modern classifiers was questioned by George Danezis [2]
 and others. However, a close look at figure 5(a) of [3] shows it to be
 empirically correct[4].
 
 Recently, Paul Syverson and I got into a disagreement over the
 effectiveness of crypto-tagging attacks such as [5].

I just wanted to let you know that I'm neither ignoring nor forgetting
about this thread and its ilk, just insanely busy with other things.
It's now clear that I'm unlikely to pick up thinking about this topic
again until c. mid-May. Sorry for any inconvenience/annoyance my
dropping out of and into the discussion may cause.

aloha,
Paul



Re: [tor-dev] A concrete proposal for crypto (at least part of it)

2011-11-02 Thread Paul Syverson
On Wed, Nov 02, 2011 at 01:19:52PM -0500, Watson Ladd wrote:
 On Wed, Nov 2, 2011 at 11:45 AM, Robert Ransom rransom.8...@gmail.com wrote:
  On 2011-11-02, Watson Ladd watsonbl...@gmail.com wrote:
  Dear All,
 [...omitted..]
 
  Right now Tor encrypts the streams of data from a client to a OR with
  AES-CTR and no integrity checks.
 
  Bullshit. We have a 32-bit-per-cell integrity check at the ends
  of a circuit.
 So let's say that I am a malicious 1st hop and a malicious 3rd hop,
 and I want to find out whether I'm on the same circuit. If I have
 known plaintext I can modify it, say the packet type headers. Then
 the third router will see nonsense and know that this circuit is
 compromised. The second router can detect this with my proposal; it
 cannot right now. Ends of the circuit alone are not enough.

There may be other virtues to integrity checks beyond those at the end
nodes, but this example is not compelling. All our experiments and
analyses have indicated that it is trivial for end nodes to know when
they share a circuit. You mention an active adversary, but it is
trivial for such an adversary to put a timing signature on traffic
detectable at the other end---trivial but unnecessary. My own work
showed a passive adversary is sufficient, and Bauer et al. showed that
you don't even need to pass application data cells: circuit setup is
enough. Despite extensive research, nobody has yet come up with a
padding/dropping scheme that resists a passive, let alone active,
adversary and is practical enough to consider implementing and
deploying.
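To see how little a passive end-to-end adversary needs, here is a toy
Python sketch (not real Tor traffic or tooling; the flow model, seed,
and thresholds are invented for illustration). It models cell
inter-arrival times seen at the entry reappearing, slightly jittered,
at the exit, and compares Pearson correlation against an unrelated flow.

```python
import random

rng = random.Random(42)
entry = [rng.expovariate(1.0) for _ in range(500)]   # inter-arrival times at position 1
exit_ = [t + rng.gauss(0, 0.01) for t in entry]      # same flow at position 3, with jitter
other = [rng.expovariate(1.0) for _ in range(500)]   # an unrelated flow

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The matched flow correlates near 1; the unrelated flow near 0.
assert pearson(entry, exit_) > 0.9
assert abs(pearson(entry, other)) < 0.2
```

Real classifiers are more sophisticated and must contend with base
rates over many flows, but the gap between matched and unmatched flows
here is why no timing signature needs to be injected.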

aloha,
Paul