Re: [tor-dev] Mostly Automatic Censorship Circumvention in Tor Browser

2021-07-08 Thread Tom Ritter
> ## Circumvention Settings Map

Do we ever see FallbackDirs censored but relays not? Not sure if that's useful.

It seems like this entire data structure could be condensed into a
very small format (2 bytes of settings per country; maybe even 1 byte
if you dropped a few things). Add 2 bytes per country name, and with
4 countries right now it's 16 bytes. That's small enough to fit in a
standard 256-bit random nonce or counter, smuggled inside a
cryptographic protocol like TLS. Or put into a DNS TXT record.
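
As a rough sketch of what I mean (the flag assignments and countries
here are invented for illustration, not taken from the actual
state-of-censorship.json):

import struct

# Hypothetical bit assignments; 2 bytes of flags leaves plenty of room.
FLAGS = {"vanilla": 1 << 0, "obfs4": 1 << 1, "meek": 1 << 2, "snowflake": 1 << 3}

def pack_entry(country_code, working_techniques):
    bits = 0
    for technique in working_techniques:
        bits |= FLAGS[technique]
    # 2 bytes of country code + 2 bytes of flags = 4 bytes per country
    return country_code.encode("ascii") + struct.pack(">H", bits)

entries = {
    "cn": ["meek", "snowflake"],
    "ir": ["obfs4", "snowflake"],
    "ru": ["obfs4"],
    "by": ["vanilla", "obfs4"],
}
blob = b"".join(pack_entry(cc, techs) for cc, techs in entries.items())
print(len(blob))  # 16 bytes for 4 countries; a 256-bit nonce holds 32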

>  Time Investment to Update Map

Updating such a file is the exact purpose of the Remote Settings
feature of Firefox:
https://firefox-source-docs.mozilla.org/services/settings/index.html
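
For reference, Remote Settings collections are just JSON records served
over HTTP, so polling one is roughly this (the collection name below is
hypothetical; the endpoint shape is the standard Remote Settings/Kinto
"records" API):

import requests

URL = ("https://firefox.settings.services.mozilla.com/v1/"
       "buckets/main/collections/tor-circumvention-settings/records")

records = requests.get(URL, timeout=30).json()["data"]
for record in records:
    print(record)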

I'm sure Tor is loath to run additional infrastructure on top of the
update server but... this exists.  I think one Remote Settings bucket
is enabled by Tor Browser? Maybe OneCRL?  (Or maybe I'm thinking of
Addon-Blocklist which I don't think is Remote Settings)

Mozilla might be able to host this _for_ Tor (and Tor devs have the
admin control on the bucket) but obviously that would allow some level
of control of Tor Browser by Mozilla.

>  Are Per-Country Entries Granular Enough?

It seems like today this problem is not worth trying to address. Seems
difficult, and has it ever actually been needed?

> ## Determining User Location

Doesn't tor ship with a geoip database that can do this given the
user's internet-facing IP with some but not perfect accuracy?
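
Something like this stem sketch is what I have in mind (it assumes a
running tor with ControlPort 9051 and the geoip files installed; the
"address" lookup can fail if tor hasn't learned its public IP yet):

from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    external_ip = controller.get_info("address")  # tor's guess at our public IP
    country = controller.get_info("ip-to-country/" + external_ip)
    print(external_ip, country)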

-tom

On Thu, 8 Jul 2021 at 12:22, Richard Pospesel  wrote:
>
> Hi Everyone,
>
> As part of our Sponsor 30 work, we are looking to improve the new
> about:torconnect experience by adding automatic tor settings
> configuration for censorship circumvention.
>
> This document outlines and discusses the *technical* challenges
> associated with this work, and does not go into any great detail on what
> the right UX would be (in terms of ease of use, user trust, etc).
>
> Anyway, if you see any pitfalls or problems with anything here, do let
> us know.
>
> --8<--
>
> # Mostly Automatic Bridge Configuration to Bypass Internet Censorship
>
> Our goal for this work is to enable Tor Browser users to access tor
> without having to navigate to about:preferences#tor to configure
> bridges. Technically speaking, this is a trivial problem assuming you know:
>
> - which bridge settings work at the user's location
> - the location of the user
>
> ## Circumvention Settings Map
>
> For now, it seems sufficient to maintain a map of countries to some
> data-structure containing information about which censorship
> circumvention techniques work and which ones do not. A proposed example
> format can be found here:
>
> -
> https://gitlab.torproject.org/tpo/anti-censorship/state-of-censorship/-/blob/main/state-of-censorship.json
>
> This map would be distributed and updated through tor-browser releases.
>
> ### Problems
>
>  Censorship Changes Invalidate the Map
>
> The obvious problem with distributing the censorship-circumvention
> settings map with Tor Browser is that if the techniques used in a
> location change such that old settings no longer work, you will be left
> with a non-functional Tor Browser with no way to update it apart from
> acquiring a fresh install with the updated settings or by manually
> configuring Tor Browser's bridge settings (which is what users have to do now).
>
> A fix for this would be to provide a rules update mechanism whereby
> updated rules could be fetched outside of tor (via the clearnet, or over
> moat). Special care would need to be taken to ensure the rule updates
> from this automatic mechanism actually came from the Tor Project (via
> some sort of signature verification scheme, for example).
>
> Another wrinkle here is that rules would also need to be distributed
> somewhere that is difficult to censor. It seems likely that we may need
> different locations and mechanisms for acquiring the rule-set based on
> the user's location.
>
> Whatever the mechanism, updates should happen at least before the user
> attempts to auto-configure. Otherwise, perhaps we should periodically
> auto-update the settings at a reasonable cadence.
>
>  Time Investment to Update Map
>
> Another problem with solely distributing the rules through Tor Browser
> is that censorship events would now require a Tor Browser release just
> to push new rules out to people. Publishing new Tor Browser releases is
> not a simple task, and enabling adversaries to force Tor Browser
> releases by tweaking their censorship systems seems like a cute way to
> DDOS the Applications team.
>
> An alternate update channel is definitely necessary outside of periodic
> Tor Browser releases.
>
>  Are Per-Country Entries Granular Enough?
>
> One could imagine highly localized censorship events occurring which
> require special settings that are not needed in the rest of the country.
> For instance, if there is a clearnet blackout in 

Re: [tor-dev] Optimistic SOCKS Data

2019-10-11 Thread Tom Ritter
On Thu, 10 Oct 2019 at 10:37, George Kadianakis  wrote:
> So are you suggesting that we can still do SOCKS error codes? But as
> David said, some of the errors we care about are after the descriptor
> fetch, so how would we do those?

Only X'F3' (Onion Service Rendezvous Failed) - right?

I think David is proposing we just don't do that one because in his
experience it's pretty rare.

> Also, please help me understand the race condition you refer to. I tried
> to draw this in a diagram form:
>   https://gist.github.com/asn-d6/55fbe7a3d746dc7e00da25d3ce90268a

I edited this:
https://gist.github.com/tomrittervg/e0552ed007dbe50077528936b09a2eff

Whose first diagram (normal situation) is correct?  Something looks
off in yours... You obviously know tor much better; but latency gain
for optimistic socks doesn't come from sending the HTTP GET to tor
sooner, it comes from sending it to the destination sooner - so I
think that the GET must travel to the destination before the
destination replies with CONNECTED, doesn't it?

Anyway, I tried to illustrate Matt's race condition as I understand it,
but I am entirely unconcerned with it. There's no way it's going to
take the browser more time to generate an HTTP GET and send it over
SOCKS than it's going to take tor to roundtrip a rendezvous setup.

> IIUC, for onions the advantage of opportunistic SOCKS is that we would
> send DATA to the service right after finishing rendezvous, whereas right
> now we need to do a round-trip with Tor Browser after finishing
> rendezvous. Is that right?
>
> If that's the case, then sending the SOCKS reply after the rendezvous
> circuit is completed would be the same as the current behavior, and
> hence not an optimization, right?

Correct.

> And sending the SOCKS reply after the introduction is completed (as
> David is suggesting) would be an optimization indeed, but we lose
> errors (we lose the rendezvous failed error, which can occur if the
> onion service is under DoS and cannot build new circuits but can still
> receive introductions).

Yup.

> What other problems exist here?

I'll have to think about it more at a time I'm better equipped to.

-tom


Re: [tor-dev] Optimistic SOCKS Data

2019-09-27 Thread Tom Ritter
On Mon, 5 Aug 2019 at 18:33, Tom Ritter  wrote:
>
> On Tue, 2 Jul 2019 at 09:23, Tom Ritter  wrote:
> > Or... something else?  Very interested in what David/asn think since
> > they worked on #30382 ...
>
> I never updated this thread after discussing with people on irc.
>
> So the implementation of
> SOCKS-error-code-for-an-Onion-Service-needs-auth is
> done. David (if I'm summarizing correctly) felt that the SOCKS Error
> code approach may not be the best choice given our desire for
> optimistic data; but felt it was up to the Tor Browser team to decide.
>
> With the goal of something that works for 90%+ of use cases today, and
> the rest later, I'll propose the following:
>
> In little-t tor, detect if we're connecting to an onion site, and if
> so do not early-report SOCKS connection.
>
> Another ugly option is to early-report a successful SOCKS connection
> even for onion sites, and if we later receive an auth request, send an
> HTTP error code like 407 that we then detect over in the browser and
> use to prompt the user. I don't like this because it is considerably
> more work (I expect), involves horrible, ugly layering violations, and I don't
> think it will work for https://onion links.

I attached an updated proposal taking this into account, and I'd like
to request it be entered into torspec's proposals list.

-tom
Filename: xxx-optimistic-socks-in-tor.txt
Title: Optimistic SOCKS Data
Author: Tom Ritter
Created: 21-June-2019
Status: Draft
Ticket: #5915

0. Abstract

   We propose that tor should have a SocksPort option that causes it to lie
   to the application that the SOCKS Handshake has succeeded immediately,
   allowing the application to begin sending data optimistically.

1. Introduction

   In the past, Tor Browser had a patch that allowed it to send data
   optimistically. This effectively eliminated a round trip through the
   entire circuit, reducing latency.

   This feature was buggy, and specifically caused problems with MOAT, as
   described in [0] and Tor Messenger as described in [1]. It is possible
   that the other issues observed with it were the same issue; it is
   possible they were different.

   Rather than trying to identify and fix the problem in Tor Browser, an
   alternate idea is to have tor lie to the application, causing it to send
   the data optimistically. This can benefit all users of tor. This
   proposal documents that idea.

   [0] https://trac.torproject.org/projects/tor/ticket/24432#comment:19
   [1] https://trac.torproject.org/projects/tor/ticket/19910#comment:3

2. Proposal

2.1. Behavior

   When the SocksPort flag defined below is present, Tor will immediately
   report a successful SOCKS handshake for non-onion connections.
   If, later, tor receives an end cell rather than a connected cell, it
   will hang up the SOCKS connection.

   The requirement to omit this for onion connections is because in
   #30382 we implemented a mechanism to return a special SOCKS error code
   if we are connecting to an onion site that requires authentication.
   Returning an early success would prevent this from working.

   Redesigning the mechanism to communicate auth-required onion sites to
   the browser, while also supporting optimistic data, is left to a future
   proposal.

2.2. New SocksPort Flag

   In order to have backward compatibility with third party applications that
   do not support or do not want to use optimistic data, we propose a new
   SocksPort flag that needs to be set in the tor configuration file in order
   for the optimistic behavior to occur.

   The new SocksPort flag is:

  "OptimisticData" -- Tor will immediately report a successful SOCKS
  handshake for non-onion connections and
  hang up if it gets an end cell rather than a
  connected cell.
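
   For example, a third-party application opting in would add the flag to
   its SocksPort line in torrc (a sketch of the proposed syntax; the flag
   does not exist yet):

       SocksPort 9050 OptimisticData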

3. Application Error Handling

   This behavior will cause the application talking to Tor to potentially
   behave abnormally as it will believe that it has completed a TCP
   connection. If no such connection can be made by tor, the program may
   behave in a way that does not accurately represent the behavior of the
   connection.

   Applications SHOULD test various connection failure modes and ensure their
   behavior is acceptable before using this feature. 
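
   As an illustration, a client using this flag might look like the
   following Python sketch. The SOCKS5 wire format is from [RFC1928]; the
   SocksPort (9050) and destination are example values, and error handling
   is deliberately minimal:

       import socket

       DEST_HOST, DEST_PORT = b"example.com", 80

       s = socket.create_connection(("127.0.0.1", 9050))
       s.sendall(b"\x05\x01\x00")            # SOCKS5, one auth method: none
       assert s.recv(2) == b"\x05\x00"       # server chose "no authentication"

       # CONNECT request with a domain-name address type.
       s.sendall(b"\x05\x01\x00\x03" + bytes([len(DEST_HOST)]) + DEST_HOST +
                 DEST_PORT.to_bytes(2, "big"))
       reply = s.recv(10)                    # success is reported optimistically
       s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                 b"Connection: close\r\n\r\n")

       data = s.recv(4096)
       if not data:
           # tor got an end cell instead of a connected cell and hung up:
           # treat this like an ordinary connection failure.
           print("connection failed after optimistic send")
       else:
           print(data[:80])
       s.close()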

References:

[RFC1928] https://www.ietf.org/rfc/rfc1928.txt


Re: [tor-dev] TBB Memory Allocator choice fingerprint implications

2019-08-21 Thread Tom Ritter
[Replying to both emails]

My hope is that most of this stems from my cursory work in replying
and a general misunderstanding.

I've seen people advocating for replacing the memory allocator in Tor
Browser since I started the effort five years ago here:
https://github.com/iSECPartners/publications/blob/master/reports/Tor%20Browser%20Bundle/Tor%20Browser%20Bundle%20-%20iSEC%20Deliverable%201.3.pdf
My replies were primarily focused on the advocacy I've seen in the
past, and the misconceptions I've seen repeated there
(like LD_PRELOAD).

On Wed, 21 Aug 2019 at 12:14, Daniel Micay  wrote:
>
> On Sat, Aug 17, 2019 at 09:17:40PM +0000, Tom Ritter wrote:
> > On Sat, 17 Aug 2019 at 15:06, procmem at riseup.net  
> > wrote:
> > > Question for the Tor Browser experts. Do you know if it is possible to
> > > remotely fingerprint the browser based on the memory allocator it is
> > > using? (via JS or content rendering)
> >
> > Fingerprint what aspect of the browser/machine?
>
> Performance-based fingerprinting of the browser can easily differentiate
> between using a different malloc implementation. That can already obtain
> a lot of fingerprinting information about the hardware and OS so this
> may not actually matter much, but it's entirely possible.


Agreed.


> > > We are thinking of switching Tor Browser to use the minimalist and
> > > security oriented hardened_malloc written by Daniel Micay. Thanks.
> >
> > I wouldn't advise giving up partitioning - for what, exactly? What
> > features does this allocator have that 68's jemalloc doesn't?
>
> The hardened_malloc allocator heavily uses partitioning, and has a much
> stronger implementation than the very weak approach in mozjemalloc. It
> statically reserves memory regions for metadata and a dedicated region
> for each arena, with each size class receiving a dedicated sub-region
> within the arena. These sub-regions are placed within their own guard
> region and each have a high entropy random base. It never mixes address
> space between these regions or reuses the address space. This is much
> different than what you call 'partitioning' in mozjemalloc which does
> not really qualify. What you're talking about is mozjemalloc exposing an
> API for choosing the arena from the code, which can certainly be done
> with hardened_malloc too. However, in mozjemalloc, the address space for
> different arenas is mixed together and reused between them. It's really
> a stretch to call this partitioning, and it doesn't have the baseline
> separation of size classes either.
>
> People can read about hardened_malloc in the README:
>
> https://github.com/GrapheneOS/hardened_malloc/blob/master/README.md#hardened-malloc
>
> I don't know why you're making the misleading claim that people would
> need to give up partitioning.

My reply about giving up partitioning here (which I clarify in the
second email) is that using LD_PRELOAD will negate partitioning.

> It's also really a stretch to call what
> Mozilla is doing in mozjemalloc partitioning in the first place, so your
> claim is really quite backwards...

Let me start by emphatically agreeing that hardened_malloc is a much
stronger allocator than mozjemalloc or jemalloc. PartitionAlloc is
too, for that matter.

Mozilla's partitioning has always been focused on preventing the
Use-After-Free problems that have plagued Firefox for years. Allocate
a DOM object, retain a pointer to it, free it, replace it with an
ArrayBuffer, align the vtable entry, redirect execution. We allocate
ArrayBuffer contents and strings in separate arenas to avoid the
immediate reuse of these bytes.


On Wed, 21 Aug 2019 at 13:40, Daniel Micay  wrote:
>
> On Mon, Aug 19, 2019 at 04:09:36PM +, Tom Ritter wrote:
> > Okay I'm going to try and clear up a lot of misconceptions and stuff
> > here.  I don't own Firefox's memory allocator but I have worked in it,
> > recently, and am one of the people who are working on hardening it.
>
> This makes it clear why you're spreading misinformation. You're going
> out of your way to make false and misleading claims about mozjemalloc
> and hardened_malloc, particularly your bogus comparisons between them.

My comparisons were rushed and cursory. I apologize and I'll clarify
my conclusions at the end of the email.

> Bolting on a few weak implementations of hardening features to an
> allocator inherently very friendly to memory corruption exploitation
> does not make it anything close to being hardened allocator, sorry.

Fair.

> > Firefox's memory allocator is not jemalloc. It's probably better
> > referred to as mozjemalloc. We forked jemalloc and have been improving
> > it (at least from our perspective.) Any analysis of or comparison to

Re: [tor-dev] TBB Memory Allocator choice fingerprint implications

2019-08-20 Thread Tom Ritter
> The only way to guarantee catching early allocator use is to switch
> the system's allocator (ie, libc itself) to the new one. Otherwise,
> the application will end up with two allocator implementations being
> used: the application's custom one and the system's, included and used
> within libc (and other system libraries, of course.)

So I don't know a ton about how this stuff works, but Firefox does
redirect allocations made by system libraries to the mozjemalloc
allocator. I know because I've been fighting with it recently because
it wasn't always doing it for MinGW and it would mismatch the alloc/free.
This is https://bugzilla.mozilla.org/show_bug.cgi?id=1547519 and
dependencies.

> > Fingerprinting: It is most likely possible to be creative enough to
> > fingerprint what memory allocator is used. If we were to choose from
> > different allocators at runtime, I don't think that fingerprinting is
> > the worst thing open to us - it seems likely that any attacker who
> > does such an attack could also fingerprint your CPU speed, RAM, and
> > your ASLR base addresses, which depending on OS might not change until
> > reboot.
>
> My post was more along the lines of: what system-level components, if
> replaced, have a potentially visible effect on current (or future)
> fingerprinting techniques?

I imagine that we have not seen the limit of creativity when it comes
to fingerprinting hardware characteristics of the user's machine.
These would include graphics card performance, CPU performance, cache
sizes (CPU and RAM), FPU operation (?), perhaps even disk speed.
Allocator, sure too.

> And: If, or how, does breaking monocultures affect fingerprinting?
> Breaking monocultures is typically done to help secure an environment
> through diversity, causing an attacker to have to spend more resources
> in quest for success.



> > The only reason I can think of to choose between allocators at runtime
> > is to introduce randomness into the allocation strategy. An attacker
> > relying on a blind overwrite may not be able to position their
> > overwrite reliably AND it has to cause the process to crash, otherwise
> > they can just try again.
> >
> > Allocators can introduce randomness themselves, you don't need to
> > choose between allocators to do that.
>
> I'm assuming you're talking about randomness of the address space?

No, randomization of the allocations.

Imagine a simplistic example of grabbing 1MB of memory, and requesting
3 allocations of 100KB each.
In a deterministic allocator you'll always get the allocations at the
same, adjacent offsets on every run.
In a randomized allocator the allocations could land at different,
unpredictable offsets each time.

This removes determinism for the attacker in laying out the heap
exactly how they want it.
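
A toy model of the difference (invented numbers, just to make the idea
concrete):

import random

ARENA_KB = 1024   # pretend we grabbed 1MB
ALLOC_KB = 100    # and make three 100KB allocations

def deterministic():
    # Same, adjacent offsets on every run.
    return [i * ALLOC_KB for i in range(3)]

def randomized():
    # Non-overlapping slots chosen at random, so the attacker can't
    # predict which allocation ends up adjacent to which.
    slots = range(0, ARENA_KB - ALLOC_KB + 1, ALLOC_KB)
    return sorted(random.sample(list(slots), 3))

print(deterministic())  # [0, 100, 200] every time
print(randomized())     # a different layout each run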

As I mention below, this randomness is easily bypassed in the content
process (where the attacker has a JIT engine to work with) and may
provide some security on the other side of an IPC boundary.

> > In virtually all browser exploits we have seen recently the attacker
> > creates exploitation primitives that allow partial memory read/write
> > and then full memory read/write. Randomness introduced is bypassed and
> > ineffective. I've seen a general trend away from randomness for this
> > purpose. The exception is when the attacker is heavily constrained -
> > like exploiting over IPC or in a network protocol. Not when the
> > attacker has a full Javascript execution environment available to
> > them.




> > In conclusion, while it's possible hardened_malloc could provide some
> > small security increase over mozjemalloc, the gap is much smaller than
> > it was when I advocated for allocator improvements 5 years ago, the
> > effort is definitely non-trivial, and the gap is closing.
>
> I'm curious about how breaking monocultures affect attacks. I think
> supporting hardened_malloc (or )
> would provide at least the framework for academic exercises.

At Mozilla in the past we have evaluated exploit mitigations by hiring
an exploit developer to write or adapt an exploit to bypass a
mitigation and give us their opinion. The replace_malloc framework is
effectively the framework for performing such an evaluation.

Exploits have become more and more frameworked. They abstract away the
exploitation primitives and write the exploits against an API. Then
for each vulnerability they construct the same primitives using
different or slightly different techniques and use mostly the same
exploit.

'Breaking the monoculture' to me feels like "The attacker doesn't know
X so they have to guess and they might guess wrong and lose their
ability to exploit."  This assumes a) they have to guess and b) they
lose their ability to exploit.

(a) does not seem true. When they have a JIT to work with, they can
almost always safely inspect the system before taking any risks.
(b) also does not seem true. Reading memory is fairly safe so the
probability of crashing is low.

I think there is *significant* advantage to trying new approaches and
experimenting. 

Re: [tor-dev] TBB Memory Allocator choice fingerprint implications

2019-08-19 Thread Tom Ritter
Okay I'm going to try and clear up a lot of misconceptions and stuff
here.  I don't own Firefox's memory allocator but I have worked in it,
recently, and am one of the people who are working on hardening it.

Firefox's memory allocator is not jemalloc. It's probably better
referred to as mozjemalloc. We forked jemalloc and have been improving
it (at least from our perspective.) Any analysis of or comparison to
jemalloc is - at this point - outdated and should be redone from
scratch against mozjemalloc on mozilla-central.

LD_PRELOAD='/path/to/libhardened_malloc.so' /path/to/program will do
nothing or approximately nothing. mozjemalloc uses mmap and low level
allocation tools to create chunks of memory to be used by its internal
memory allocator. To successfully replace Firefox's memory allocator you
should either use LD_PRELOAD _with_ a --disable-jemalloc build OR
Firefox's replace_malloc functionality:
https://searchfox.org/mozilla-central/source/memory/build/replace_malloc.h

Fingerprinting: It is most likely possible to be creative enough to
fingerprint what memory allocator is used. If we were to choose from
different allocators at runtime, I don't think that fingerprinting is
the worst thing open to us - it seems likely that any attacker who
does such an attack could also fingerprint your CPU speed, RAM, and
your ASLR base addresses, which depending on OS might not change until
reboot.

The only reason I can think of to choose between allocators at runtime
is to introduce randomness into the allocation strategy. An attacker
relying on a blind overwrite may not be able to position their
overwrite reliably AND it has to cause the process to crash, otherwise
they can just try again.

Allocators can introduce randomness themselves, you don't need to
choose between allocators to do that.

In virtually all browser exploits we have seen recently the attacker
creates exploitation primitives that allow partial memory read/write
and then full memory read/write. Randomness introduced is bypassed and
ineffective. I've seen a general trend away from randomness for this
purpose. The exception is when the attacker is heavily constrained -
like exploiting over IPC or in a network protocol. Not when the
attacker has a full Javascript execution environment available to
them.

When exploiting a memory corruption vulnerability, you can target the
application's memory (meaning, target a DOM object or an ArrayBuffer)
or you can target the memory allocator's metadata. While allocator
metadata corruption was popular in the past, I haven't seen it used
recently.




Okay all that out of the way, let's talk about allocators.

I skimmed https://github.com/GrapheneOS/hardened_malloc and it looks
like it has:
 - out of line metadata
 - double free protection
 - guard regions of some type
 - zero-filling
 - MPK support
 - randomization
 - support for arenas

mozjemalloc:
 - arenas (we call them partitions)
 - randomization (support for, not enabled by default due to limited
utility, but improvements coming)
 - double free protection
 - zero-filling
In Progress:
 - we're actively working on guard regions
Future Work:
 - out of line metadata
 - MPK

hardened_malloc definitely has more bells and whistles than mozjemalloc.
But the benefit gained by slapping in an LD_PRELOAD and calling it a
day is small to zero. Probably negative because you'll not utilize
partitions by default. You'd need a particularly constrained
vulnerability to actually prevent exploitation - it's more likely
you'll just cost the attacker another 2-8 hours of work.

Out of line metadata is on-the-surface-attractive but... that tends to
only help when you have an off-by-one/four write and you corrupt
metadata state because it's the only thing you *can* do. With out of
line metadata, you can just corrupt a real object and effect a
different type of corruption. I'm pretty skeptical of the benefit at
this point, although I could be convinced. We don't see metadata
corruption attacks anymore - but I'm not sure if it's because we find
better exploit primitives or better vulnerabilities.

In particular, if you wanted to pursue hardened_malloc you would need
to use replace_malloc and wire up the partitions correctly.
Randomization will almost certainly not help (and will hurt
performance)*. MPK sounds nice but you have to use it correctly (which
requires application code changes), you have to ensure there are no
MPK gadgets, and oh wait no one can use it because it's only available
in Linux on server CPUs. =(

* One place randomization will help is on the other side of an IPC
boundary. e.g. in the parent process. I'm trying to get that enabled
for mozjemalloc in H2 2019.

In conclusion, while it's possible hardened_malloc could provide some
small security increase over mozjemalloc, the gap is much smaller than
it was when I advocated for allocator improvements 5 years ago, the
effort is definitely non-trivial, and the gap is closing.

If people had the cycles to invest in 

Re: [tor-dev] TBB Memory Allocator choice fingerprint implications

2019-08-17 Thread Tom Ritter
On Sat, 17 Aug 2019 at 15:06, proc...@riseup.net  wrote:
> Question for the Tor Browser experts. Do you know if it is possible to
> remotely fingerprint the browser based on the memory allocator it is
> using? (via JS or content rendering)

Fingerprint what aspect of the browser/machine?

> We are thinking of switching Tor Browser to use the minimalist and
> security oriented hardened_malloc written by Daniel Micay. Thanks.

I wouldn't advise giving up partitioning - for what, exactly? What
features does this allocator have that 68's jemalloc doesn't?

-tom


Re: [tor-dev] Optimistic SOCKS Data

2019-08-05 Thread Tom Ritter
On Tue, 2 Jul 2019 at 09:23, Tom Ritter  wrote:
> Or... something else?  Very interested in what David/asn think since
> they worked on #30382 ...

I never updated this thread after discussing with people on irc.

So the implementation of
SOCKS-error-code-for-an-Onion-Service-needs-auth is
done. David (if I'm summarizing correctly) felt that the SOCKS Error
code approach may not be the best choice given our desire for
optimistic data; but felt it was up to the Tor Browser team to decide.

With the goal of something that works for 90%+ of use cases today, and
the rest later, I'll propose the following:

In little-t tor, detect if we're connecting to an onion site, and if
so do not early-report SOCKS connection.

Another ugly option is to early-report a successful SOCKS connection
even for onion sites, and if we later receive an auth request, send an
HTTP error code like 407 that we then detect over in the browser and
use to prompt the user. I don't like this because it is considerably
more work (I expect), horrible ugly layering violations, and I don't
think it will work for https://onion links.

-tom


Re: [tor-dev] Optimistic SOCKS Data

2019-07-02 Thread Tom Ritter
On Tue, 2 Jul 2019 at 13:42, Mark Smith  wrote:
>
> On 6/21/19 8:50 PM, Tom Ritter wrote:
> > The attached is a draft proposal for allowing tor to lie to an
> > application about the SOCKS connection enabling it to send data
> > optimistically.
> >
> > It's going to need some fleshing out in ways I am not familiar with,
> > but I wanted to get something out to start as we think that this is
> > probably the best path forward for bringing back Tor Browser's
> > optimistic SOCKS behavior.
>
> I am not sure what to do about it, but I think the approach you describe
> will break the method that Tor Browser just started to use to detect
> that an onion service requires client authentication (see
> https://trac.torproject.org/projects/tor/ticket/3 and associated
> child tickets). The tldr is that we rely on receiving a new error code
> from the SOCKS connect request.

Hm, yes.

We could not use optimistic data for onions...

Or instead of using a SOCKS error code we could return a special type
of error (encapsulated in a HTTP response) recognizable by Tor
Browser. Something like "If the response to an onion request is status
code 407 Proxy Authentication Required (or 4xx whatever) then the
Browser should prompt for onion service client authentication and
retry the request with that."

Or... something else?  Very interested in what David/asn think since
they worked on #30382 ...

-tom


Re: [tor-dev] Optimistic SOCKS Data

2019-06-30 Thread Tom Ritter
I'll add that a cypherpunk has been testing a very simple patch
implementing this behavior for a few months and has not seen adverse
effects: 
https://trac.torproject.org/projects/tor/attachment/ticket/5915/tor-optimistic-data.patch
(Although I propose to not include the error page component.)

-tom

On Sat, 22 Jun 2019 at 00:50, Tom Ritter  wrote:
>
> The attached is a draft proposal for allowing tor to lie to an
> application about the SOCKS connection enabling it to send data
> optimistically.
>
> It's going to need some fleshing out in ways I am not familiar with,
> but I wanted to get something out to start as we think that this is
> probably the best path forward for bringing back Tor Browser's
> optimistic SOCKS behavior.
>
> -tom


[tor-dev] Optimistic SOCKS Data

2019-06-21 Thread Tom Ritter
The attached is a draft proposal for allowing tor to lie to an
application about the SOCKS connection enabling it to send data
optimistically.

It's going to need some fleshing out in ways I am not familiar with,
but I wanted to get something out to start as we think that this is
probably the best path forward for bringing back Tor Browser's
optimistic SOCKS behavior.

-tom
Filename: xxx-optimistic-socks-in-tor.txt
Title: Optimistic SOCKS Data
Author: Tom Ritter
Created: 21-June-2019
Status: Draft
Ticket: #5915

0. Abstract

   We propose that tor should have a SocksPort option that causes it to lie
   to the application that the SOCKS Handshake has succeeded immediately,
   allowing the application to begin sending data optimistically.

1. Introduction

   In the past, Tor Browser had a patch that allowed it to send data
   optimistically. This effectively eliminated a round trip through the
   entire circuit, reducing latency.

   This feature was buggy, and specifically caused problems with MOAT, as
   described in [0] and Tor Messenger as described in [1]. It is possible
   that the other issues observed with it were the same issue; it is
   possible they were different.

   Rather than trying to identify and fix the problem in Tor Browser, an
   alternate idea is to have tor lie to the application, causing it to send
   the data optimistically. This can benefit all users of tor. This
   proposal documents that idea.

   [0] https://trac.torproject.org/projects/tor/ticket/24432#comment:19
   [1] https://trac.torproject.org/projects/tor/ticket/19910#comment:3

2. Proposal

2.1. New SocksPort Flag

   In order to have backward compatibility with third party applications that
   do not support or do not want to use optimistic data, we propose a new
   SocksPort flag that needs to be set in the tor configuration file in order
   for the optimistic behavior to occur.

   The new SocksPort flag is:

  "OptimisticData" -- Tor will immediately report a successful SOCKS
  handshake and hang up if it gets an end cell
  rather than a connected cell.

3. Application Error Handling

   This behavior will cause the application talking to Tor to potentially
   behave abnormally as it will believe that it has completed a TCP
   connection. If no such connection can be made by tor, the program may
   behave in a way that does not accurately represent the behavior of the
   connection.

   Applications SHOULD test various connection failure modes and ensure their
   behavior is acceptable before using this feature. 

References:

[RFC1928] https://www.ietf.org/rfc/rfc1928.txt


Re: [tor-dev] Proposal 302: Hiding onion service clients using WTF-PAD

2019-05-16 Thread Tom Ritter
On Thu, 16 May 2019 at 11:20, George Kadianakis  wrote:
> 3) Duration of Activity ("DoA")
>
>   The USENIX paper uses the period of time during which circuits send and
>   receive cells to distinguish circuit types. For example, client-side
>   introduction circuits are really short lived, whereas service-side
>   introduction circuits are very long lived. OTOH, rendezvous circuits 
> have
>   the same median lifetime as general Tor circuits which is 10 minutes.
>
>   We use WTF-PAD to destroy this feature of client-side introduction
>   circuits by setting a special WTF-PAD option, which keeps the circuits
>   open for 10 minutes completely mimicking the DoA of general Tor 
> circuits.

10 minutes exactly; or a median of 10 minutes?  Wouldn't 10 minutes
exactly be a near-perfect distinguisher? And if it's a median of 10
minutes, do we know if it follows a normal distribution/what is the
shape of the distribution to mimic?

-tom


Re: [tor-dev] #3600 tech doc

2019-03-13 Thread Tom Ritter
New development:
https://webkit.org/blog/8613/intelligent-tracking-prevention-2-1/

In particular:

-
WebKit implemented partitioned caches more than five years ago. A
partitioned cache means cache entries for third-party resources are
double-keyed to their origin and the first-party eTLD+1. This
prohibits cross-site trackers from using the cache to track users.
Even so, our research has shown that trackers, in order to keep their
practices alive under ITP, have resorted to partitioned cache abuse.
Therefore, we have developed the verified partitioned cache.

When a partitioned cache entry is created for a domain that’s
classified by ITP as having cross-site tracking capabilities, the
entry gets flagged for verification. After seven days, if there’s a
cache hit for such a flagged entry, WebKit will act as if it has never
seen this resource and load it again. The new response is then
compared to the cached response and if they match in the ways we care
about for privacy reasons, the verification flag is cleared and the
cache entry is from that point considered legitimate. However, if the
new response does not match the cache entry, the old entry is
discarded, and a new one is created with the verification flag set,
and the verification process starts over.

ITP currently does this verification for permanent redirects since
that’s where we see abuse today.
--

It's not clear to me if the permanent redirects are in a partitioned
cache though. Either way, this doesn't affect Tor too much given that
we don't save history.

Although it does bring up a simple case that we could implement with no
problem: never remember a permanent redirect.

-tom


Re: [tor-dev] #3600 tech doc

2019-01-18 Thread Tom Ritter
On Fri, 18 Jan 2019 at 21:00, Richard Pospesel  wrote:
> The Double-Keyed Redirect Cookies + 'Domain Promotion' tries to fix this
> multiple/hidden session problem by promoting the cookies of double-keyed
> websites to first-party status in the case where the originating domain is
> positively identified as solely a redirect. In the gogle.com -> google.com
> scenario, if Tor Browser could identify that gogle.com is used solely to
> redirect to google.com, then we could take the double-keyed 
> gogle.com|google.com
> cookies and move them into the google.com bucket and eliminate the double
> session.

How would we detect this?

Let's say hypothetically (I haven't checked) gogle.com does not set
any cookies; and just sends a 301 permanent redirect.  We then perform
the upgrade from gogle.com|google.com to google.com
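
A rough sketch of the bookkeeping I'm imagining (not something Tor
Browser actually does): only count a domain as "solely a redirect" if
every response we've ever observed from it was a cookie-less 3xx.

from collections import defaultdict

observations = defaultdict(list)   # domain -> [(status, set_cookie_seen), ...]

def record(domain, status, set_cookie_seen):
    observations[domain].append((status, set_cookie_seen))

def is_redirect_only(domain):
    seen = observations[domain]
    return bool(seen) and all(300 <= status < 400 and not cookie
                              for status, cookie in seen)

record("gogle.com", 301, False)
print(is_redirect_only("gogle.com"))   # True - though one data point proves little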

If we turn it on its head: google.com decides to redirect you to
tracker342451345.google.com with a 301 (and setting no cookies.) We
upgrade google.com|tracker342451345.google.com to
tracker342451345.google.com and do so for as long as your session is
open.
Does this enable a tracking vector? I don't think so; I couldn't
identify one - but it feels like there might be something here...

-tom


Re: [tor-dev] #3600 tech doc

2018-11-15 Thread Tom Ritter
I spent some time reading through the Mix and Match proposal. I'm not
sure I understand it.

In particular, I am confused about:

The proposal seems to focus heavily on what we do with state we
receive as part of the redirect: do we promote it, or do we leave it
double keyed? It doesn't seem to explain how we choose what state to
_send_. For example:

> For instance, in a redirect chain from foo.com -> tracker.com -> bar.com,
> the tracker.com cookies will be double keyed foo.com|tracker.com, while
> the bar.com cookies will be double keyed foo.com|bar.com. However, after
> the user begins to interact with bar.com, bar.com is promoted to be the
> First Party Domain, and Cookies set on the initial redirect need to be
> moved under the bar.com key.

When we send a request to foo.com, I assume we will send any current
cookies we have keyed under foo.com|foo.com[0]. When we receive a
redirect to tracker.com - how do we choose what state to send? We
don't know ahead of time whether it will give us a redirect or not, so
are we sending it any state we have under tracker.com|tracker.com
(treating it as a first party) or are we sending it any state we have
under foo.com|tracker.com?

The latter is better for privacy; but it would require you to
re-sign-in via OAuth a lot (pretend tracker.com is oauth.com); and I'm
nervous it would break login flows. Especially if you interact with
oauth.com and that seems to promote it into oauth.com|oauth.com and
then you later go through foo.com|oauth.com and there's no state
there...


[0] I'm pretty sure that we use the First Party Domain as both the
primary and secondary key for state under the first party; right? In
any event, when I say foo.com|foo.com I mean data keyed under the
foo.com first party.


I'm also a bit confused about the difference between different targets
of redirects.  It seems like:
- If the target is example.com: we don't double-key or need to promote
upon interaction
- If the target is example.com?lang=en: we do double-key any state
set, and upon user interaction promote the state to first party.
- If the target is example.com/foo/bar.html: we do double-key any
state set, and upon user interaction promote the state to first party.


Finally, in a multi-redirect scenario like a.com -> b.com -> c.com,
I'm unsure if there is a difference in how we handle state we receive
for b.com if:
- The target is b.com
- The target is b.com?lang=en
- The target is b.com/foo/bar.html




I started drawing out a matrix of what happens when. I came up with
the following. I don't think I understand the proposal well enough to
fill it out.  I'm hoping I will be able to do so though! I'm going to
paste it in its entirety:


--
Single-Redirect, Before User Interaction

Click a link for aaa.com/foo/blah.html and the response redirects to
ccc.com (before any user interaction):
- To aaa.com you send state keyed under aaa.com|aaa.com
- To ccc.com you send state keyed under ccc.com|ccc.com
- The browser deposits you at ccc.com
- Any cookies or other state set by aaa.com is set normally according
to FPI rules, so will be keyed under aaa.com|aaa.com
- Any cookies or other state set by ccc.com is set normally according
to FPI rules, so will be keyed under ccc.com|ccc.com

Click a link for aaa.com/foo/blah.html and the response redirects to
ccc.com?lang=en (before any user interaction):
- To aaa.com you send state keyed under ???
- To ccc.com you send state keyed under ???
- The browser deposits you at ??
- Any cookies or other state set by aaa.com is keyed under ??
- Any cookies or other state set by ccc.com is keyed under ??

Click a link for aaa.com/foo/blah.html and the response redirects to
ccc.com/new-foo/blah.html (before any user interaction):
- To aaa.com you send state keyed under ???
- To ccc.com you send state keyed under ???
- The browser deposits you at ??
- Any cookies or other state set by aaa.com is keyed under ??
- Any cookies or other state set by ccc.com is keyed under ??

--
Single-Redirect, After User Interaction
Perhaps you scroll the page at ccc.com or perhaps click a link or
highlight some text.

Click a link for aaa.com/foo/blah.html and the response redirects to
ccc.com, and then you interact:
- To aaa.com you send state keyed under aaa.com|aaa.com
- To ccc.com you send state keyed under ccc.com|ccc.com
- The browser deposits you at ccc.com
- There is no change to state for aaa.com, as it is already stored
under aaa.com|aaa.com
- There is no change to state for ccc.com, as it is already stored
under ccc.com|ccc.com

Click a link for aaa.com/foo/blah.html and the response redirects to
ccc.com?lang=en, and then you interact:
- To aaa.com you send state keyed under ???
- To ccc.com you send state keyed under ???
- The browser deposits you at ??
- Any cookies or other state set by aaa.com is migrated(?) and now
keyed under ??
- Any cookies or other state set by ccc.com is migrated(?) and now
keyed under ??

Click a link for aaa.com/foo/blah.html and 

Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Onion-Location HTTP header

2018-10-23 Thread Tom Ritter
On Tue, Oct 23, 2018, 12:15 PM Alec Muffett  wrote:

>
> The world has changed since Tor was first invented; perhaps it's time that
> we stopped trying to hide the fact that we are using Tor? Certainly we
> should attempt to retain the uniformity across all tor users - everybody
> using Firefox on Windows and so forth - but the fact that/when traffic
> arrives from Tor is virtually unhideable.
>
> Consciously sacrificing that myth would make uplift to onion networking so
> much simpler.
>

I agree.

In particular because I want to avoid false positives and false negatives
in the reputation system.

But by what mechanism do we expose this information? I can't think of one
that doesn't have significant drawbacks.  And what do we say/what do we
mean? (I am onion capable?)

>


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Onion-Location HTTP header

2018-10-23 Thread Tom Ritter
On Wed, 26 Sep 2018 at 06:51,  wrote:
> ...

I want to compare your proposal with the simple situation of "If the
server gets a connection from a Tor exit node, return Location:
blah.onion."  (This would also separate the cookie space)

If I understand your proposal correctly, the differences are:

1) Because the client offers h2o in ALPN, the server knows (at initial
TLS connection time) the client supports .onion transport.  (We also
leak this out onto the network in plaintext; but since it comes from
an exit node it's not too concerning?)

2) The server offers h2o as an Alt-Svc protocol, so any client not
supporting onions will ignore it gracefully. There is no concern that
the server could send a Location: blah.onion header to a client not
supporting onions; or omit it from a client supporting onions.

3) Tor Browser can implement additional authentication checks for the
transfer from blah.com -> blah.onion

I'm not clear if the connection to blah.onion involves HTTPS or HTTP;
but I suppose both are possible.

Because the response from the server comes from


So I like to try and keep the intent of headers as close as possible.
Alt-Svc is used to redirect routing without visible changes and
without user prompting. That's why I'm opposed to Alt-Svc:
h2/blah.onion prompting the user, and opposed to the Location: header
prompting the user but am perfectly supportive of a new Onion-Location
header prompting the user.  Creating h2o for Alt-Svc and implementing
it in a way that redirects the user violates the intent of Alt-Svc.

Additionally, ALPN is designed for negotiating an Application Layer
Protocol - what you speak inside TLS. h1 and h2 are different
protocols, so one uses ALPN to negotiate h2 in the TLS connection, and
the first byte of the application layer protocol is HTTP2. In your
proposal; you negotiate a different protocol, but still speak h2.
Actually it's not clear if one should speak HTTP or HTTP2 to the
server! (We could require http2 but that seems like a bad idea.)

The response from the server still comes in the Alt-Svc header; so
there's no connection latency to be avoided.

I like the goal of giving server operators the ability to separate
cookie spaces.  But I think that's accomplished by both a prompted
Onion-Location redirect or a non-prompted Location redirect.

I like the goal of having no ambiguity or confusion about what a
browser that does/doesn't support onion should do with an onion
address, or the possibility of not serving an onion address to someone
who should get one.  Onion-Location solves this for prompted redirects.
Location does not solve this for promptless redirects. (We could add a
'force' parameter to Onion-Location to bypass the prompt. I think this
is a good idea and would suggest we add it to the proposal.)

I like the idea of allowing Tor Browser to require and perform
additional security checks for the transfer from http(s) -> onion. But
I don't see why they couldn't be added/performed as part of the
Onion-Location transfer.

I like the idea of using ALPN for something; but I don't think this is
the right problem to use it for.  Because it's used for Application
Layer Protocol selection, it is the perfect choice to use to negotiate
a Tor connection or a Pluggable Transport connection to a server
supporting both a website and Tor.  (Imagine if that were deployed on
something like Github!) But it's _in plaintext_. So any connection
offering such an ALPN could be blocked. I'm still disappointed the WG
chose ALPN instead of NPN. With this plaintext limitation, I don't
know what we could use ALPN for.  Maybe we could use it inside normal
Tor connections to negotiate a new Tor protocol that didn't nicely fit
into the existing tor protocol version negotiation.

-tom


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2018-09-21 Thread Tom Ritter
> with the exact same
> restrictions and semantics as the Location HTTP header

Maybe that should be 'syntax'?  Semantics would mean that the header
behaves the same way right?  But it doesn't. Location is a prompt-less
redirect, O-L is a prompted redirect.  Additionally, O-L has an
additional restriction that the URI specified must be .onion?

> websites with lots of client traffic are encouraged

Why do we need to encourage them? Aren't they sufficiently motivated
themselves? I would go so far as to suggest they do _not_ do that,
because there is no fully reliable detection mechanism.  But if they
want to, we 'can provide suggestions for them'?

And perhaps one suggestion is to detect User-Agent and only serve it
to one of the five user-agents that support Tor? (Since we discourage
anything else?)  (TB, TBA, Brave, Orfox, OnionBrowser)

-tom


Re: [tor-dev] Bandwidth scanner: request for feedback

2018-08-30 Thread Tom Ritter
On 29 August 2018 at 16:11, Mike Perry  wrote:
> Ideally, I would like us to perform A/B experiments to ensure that our
> performance metrics do not degrade in terms of average *or* quartile
> range/performance variance. (Ie: alternate torflow results for a week vs
> sbws for a week, and repeat for a few weeks). I realize this might be
> complicated for dirauth operators, though. Can we make it easier
> somehow, so that it is easy to switch which result files they are voting
> with?

Having both voting files means running both scanners at the same time.
Depending on one's pipes, that might skew the results from the
scanners.

-tom


[tor-dev] oss-fuzz Coverage

2018-08-29 Thread Tom Ritter
tor is in OSS-Fuzz, and I recently found this very slick dashboard
that shows you just what coverage tor is getting out of it:
https://storage.googleapis.com/oss-fuzz-coverage/tor/reports/20180829/linux/report.html

Thought I'd share in case others hadn't seen it (I think it's fairly new.)

-tom


Re: [tor-dev] Brief state of sbws thoughts

2018-07-19 Thread Tom Ritter
I'm happy and prepared to run sbws and torflow side by side. I'm a
little less swamped than I was a month ago.  I don't need a debian
package; I'd rather run it from a git clone.

I think the only things I can't do are
a) give you access to the box directly (but I can make whatever
files/logs/raw results that you want available to you over HTTP)
b) stop running torflow. (Unless we're ready to switch a live bwauth
over to sbws.)

FWIW, I have the advantage of having archived my (semi-)raw bwauth
data for a while: https://bwauth.ritter.vg/bwauth/

-tom

On 19 July 2018 at 10:16, juga  wrote:
> Matt Traudt:
>> Teor, Juga
>>
>> There's a lot of things fighting for my attention right now, so you
>> might have noticed I've slowed way down on attending to sbws
>> tickets/PRs/etc. I think time will free up in the next few days.
>>
>> I think sbws is in a very good place code-wise right now. I don't think
>> much more **has** to be done to the code. Even though I enjoy adding
>> things like the state file (GHPR#236 [2]), I don't think that was a good
>> use of my time.
>>
>> It looks like there's a lot of check boxes Juga has made regarding
>> making a Debian package[0]. Those should get checked. These are important.
>>
>> However, I think the absolute most important thing for us to be spending
>> our time on right now is deciding what "good" results are and verifying
>> sbws produces "good" results.
>>
>> To accomplish this, I think one of the two suggestions I made in a
>> comment on GH#182 [1] (quoted here) is what we should be doing.
>>
>> 1. Run torflow and sbws side-by-side (but not at the same time) to
>> remove more variables. This has the added benefit of us having access to
>> the raw scanner results from torflow before it does whatever magic
>> scaling it does. OR
>
> In that ticket you also mentioned that someone that already runs torflow
> should also run sbws.
> I said i can run both, and still the case if needed.
>
>> 2. Ask for access to raw scanner results from someone running torflow.
>>
>> I fear sbws is doomed to die the death of the new bandwidth scanners
>> before it if we don't start seriously verifying sbws is "good" or if I
>> personally slowly stop working/coordinating work on it.
>
> I don't think that's the case. I've not forget it... and i'm sure teor
> neither.
> Some of the last work we have done is regarding getting the bandwidth
> files archived, what will also help to determine whether sbws results
> are "good".
>
> If 1. would be run by someone else, getting [0] done is indeed important
> and i'm currently working on it.
>
> And maybe we aren't able to determine how "good" sbws results are until
> it actually starts being run by dirauths, for which [0] is still important.
>
> Thanks for sharing your thoughts,
> juga.
>
>> Thanks
>>
>> Matt
>>
>> [0]: https://trac.torproject.org/projects/tor/ticket/26848
>> [1]:
>> https://github.com/pastly/simple-bw-scanner/issues/182#issuecomment-404250053
>> [2]: https://github.com/pastly/simple-bw-scanner/pull/236
>>
>


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2018-07-13 Thread Tom Ritter
On 7 July 2018 at 13:07, Iain Learmonth  wrote:
> Hi,
>
> I've had a go at implementing this for my personal blog. Here are some
> things:

Good feedback!

> My personal website is a static site (mostly). In my implementation, I
> took a list of all possible HTML URLs (excluding images, stylesheets,
> etc.) and generated a list of corresponding onion locations.
>
> I figured that being a blog, people often link to individual pages
> instead of just to my homepage (which is probably the least useful page
> on the site). Having the Onion-Location header on every page someone
> could land on gives the best chance that they will discover the onion
> service.

Ah, that makes sense. You want /foo.html to serve an Onion-Location
that goes to /foo.html

But you're saying you did this manually for each file?  I guess I
hadn't thought about how I would implement this (for Apache)... http
-> https redirection is done with mod_write, typically something like

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}/$1 [R,L]

I don't mess with Apache/mod_rewrite much, but surely there's a way to
write out the Onion-Location header with the supplied path/querystring
automatically?
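
Something like this might work (untested, and example.onion is a
placeholder) - stash the request URI in an environment variable with
mod_rewrite and have mod_headers emit it; query strings would need
similar handling:

RewriteEngine On
RewriteRule .* - [E=ONION_PATH:%{REQUEST_URI}]
Header set Onion-Location "http://example.onion%{ONION_PATH}e"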


> But then I realised that some of the locations I had generated
> Onion-Locations for would also be serving Location headers as they were
> old URLs. What should a browser do in this case? What should an
> implementer do? In my implementation, I've thrown in the Onion-Location
> headers regardless of whether or not a Location header is also present
> because it was easier.

I think that is fine, but...

> It could be preferable that the redirection is followed after switching
> to the Onion service (i.e. Location header is ignored until user
> responds to the Onion-Location header prompt), but this would mean the
> page wouldn't have loaded before you get the prompt to go to the Onion
> service, which may be confusing for users. Alternatively, if the page
> has a Location header then the Onion-Location header should be ignored.

I agree that if a Location header is present, the browser should
follow it immediately. If the subsequent location has an
Onion-Location header (and no Location header) then the browser should
prompt.

Location is a non-prompt, non-negotiable redirect.
Onion-Location is a prompted, user-chosen redirect.

The only question in my mind: if the user has opted in to always
following Onion-Location redirects, which header do you follow? I
would suggest Onion-Location, although I don't have a strong argument
for that choice besides "It's our feature, we should give it
precedence."

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Notes from 12 April 2018 Simple Bandwidth Scanner Meeting

2018-04-12 Thread Tom Ritter
I'm happy to run a sbws alongside my torflow. It will let us compare bw
numbers apples to apples too.  My only difficulty is being unable to spend
significant time to diagnose why it doesn't work, if it doesn't work.

If it's at the point I should give it a shot, point me at some instructions
:)

-tom

On Thu, Apr 12, 2018, 7:38 PM Matt Traudt  wrote:

> See below for the pad notes. Next meeting is scheduled 19 April 2018 at
> 2200 UTC in #tor-meeting. (This one was held in #tor-dev, but we should
> use meetbot).
>
> -
>
>
> Simple Bandwidth Scanner meeting 12 April 2018
>
>  Updates/Status messages 
>
> pastly:
> What's on my plate? <- doesn't have to be all in your plate :P
> - Test coverage getting closer to 100%
> - Immediate future: switch to standard python logging module, which
> is quite good
> - Improving documentation
> - Checking results against torflow
> - Monitor CPU of sbws client/server
> - +1 on considering asyncio
> - See how chutney generates random strings
> - Run testnet authority
> - Reach out to current auths about running sbws/torflow and adding
> me as an auth
>
> juga:
> - open/close PRs/issues about things to improve in doc, refactor
> code, etc..., but not changing functionality
> - re. doc:
> - thought to update sbws spec (or create other) to doc
> differences with Torflow, not sure it's useful
> - i'd document further some of the classes/functions (as
> measure_relay)
> - code doc vs spec (see below)
> - find box to run other sbws, bwauth also in testnet?
>
> ## Topic: what is still missing for milestone 1? (aka 1st release, v1.0.0)
> - could we create all tickets needed to achieve it?
> - maybe previous list is enough?
> Missing:
> - A consensus parameter stating the scaling factor
> - sbws config option to set fallback if no consensus param
> - `sbws generate` code to use the consensus param
>
> -
>
> https://stem.torproject.org/api/descriptor/networkstatus.html#stem.descriptor.networkstatus.NetworkStatusDocumentV3
>
> - Correlation coefficient on comparison graphs
>
>
>
>
> ## Topic: comparing to torflow
> tah
> - Can we make the test sbws deployment a little bigger?
> - What else needs to be compared?
>
> teor: actually running it in a voting network, to check the feedback
> loop (if any) the scaling
>
> - Conclusions after comparing?
> - what we could think to change/improve after comparing?
>
> Graphs pastly can explain:
> - sbws vs moria, sorted by sbws:
> https://share.riseup.net/#-W_zqcv-08AX4SnOgTatUw
> - sorted by moria: https://share.riseup.net/#URXp6NccZHEhOPFJQcfO4w
>
>
> teor: the correlation seems good here
> If we're going to use these charts to compare, please compare two
> existing bwauths
> See: https://share.riseup.net/#lPGcIrgHp3ftnvTHUKqOKg (but ignore the
> sbws-scaled line, it's wrong wrong wrong)
>
>
> ## Topic: convincing people to run sbws
> juga: maybe something to do when 1st sbws release?
> pastly: yes, probably. unless we need to convince testnet people <- ah,
> right i was thinking on the Tor net
>
>
> ## Topic: status of open sourcing sbws
> - No real update. Time is still passing.
>
> ## Topic: specifications
>
> torflow/BwAuthority:
>
> https://gitweb.torproject.org/torflow.git/tree/NetworkScanners/BwAuthority/README.spec.txt
> ,
> https://ohmygodel.com/publications/peerflow-popets2017.pdf has a section
> that also makes a nice summary
> sbws:
>
> https://github.com/pastly/simple-bw-scanner/blob/master/docs/source/specification.rst
> (ask Pastly for access)
> bwscanner: no spec, but reading
>
> https://github.com/TheTorProject/bwscanner/blob/develop/bwscanner/circuit.py#L45
> it looks like a Torflow clone <- almost :)
>
> We need a spec for the v3bw file that tor reads (in torspec/dir-spec.txt)
> We need a spec for bwauth migration, including acceptance criteria for
> new bwauth implementations
> Scanners should have their own detailed design documents
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Consensus-health single-relay data

2018-04-06 Thread Tom Ritter
This now supports exact nickname, partial fingerprint, and multiple of
each; and is deployed.  Other updates are:

- Known flags are not omitted from summary
- A footnote is given when an assigning bwauth is not present
- the misleading sha-1 signature algorithm is removed
- Atlas became Relay Search
- You can search for relays in votes that are missing a flag (e.g.
that are struck-through) by prefacing the flag with a "!".  E.g.
"!HSDIR" on the detailed will show relays where one DirAuth did not
vote for the HSDir flag, but enough DirAuths did such that it was
granted the flag.  This is particularly useful for !ReachableIPv6

On 9 March 2018 at 13:55, teor <teor2...@gmail.com> wrote:
>
>
>> On 9 Mar 2018, at 20:28, Tom Ritter <t...@ritter.vg> wrote:
>>
>> I have tested it on Tor Browser and High Security Slider, seems to
>> work for me, but I want feedback on the UX and for bugs
>
> Wow! It works! And it even works in iOS Safari.
>
> Also, there is a feature? where you can keep pasting relay fingerprints
> into the box, and it will keep adding them to the end of the page.
>
> How hard would it be to add support for:
> * multiple fingerprints
> * nicknames
> * partial fingerprints
>
> I don't know how important each of these features are.
> But I bet that nicknames would be the first feature request from most users.
>
> T
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Scaling bandwidth scanner results

2018-03-18 Thread Tom Ritter
After #1 is decided, we can convert past bwauth data, can't we?  If
it's helpful I can (at some point) compare your data against
historical (converted) data as I've been doing:
https://tomrittervg.github.io/bwauth-tools/
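
(For reference, my reading of the proportional scaling described below
is roughly this; a Python sketch where the relay names and raw numbers
are made up, and 50 million is just the suggested total:)

def scale_to_total(raw, agreed_total=50000000):
    # Express each weight as a proportion of the total, then multiply
    # by the agreed total.
    total = sum(raw.values())
    return {fp: value * agreed_total // total for fp, value in raw.items()}

# Two scanners that disagree on absolute units but agree on proportions
# produce identical scaled outputs:
print(scale_to_total({'relayA': 100, 'relayB': 300, 'relayC': 600}))
print(scale_to_total({'relayA': 10,  'relayB': 30,  'relayC': 60}))
# both: {'relayA': 5000000, 'relayB': 15000000, 'relayC': 30000000}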

-tom

On 18 March 2018 at 20:22, Matt Traudt  wrote:
> I've made some good progress on a bare bones, doesn't-do-more-than-it-
> has-to bandwidth scanner. It can generate output just like torflow[0].
>
> We need to decide on how to scale results that come from different
> measurement systems. The simple, don't-make-it-harder-than-it-has-to-be
> idea is (quote [1], see [2]):
>
>> Express each weight as a proportion of the total, and multiply by
> some agreed total (e.g. for the current network it would have to be the
> total of the consensus weight, but within some limited range to avoid
> unbounded growth).
>
> So we need to:
>
> 1. Decide on a large total.
>
> I suggest 50 million to start the conversation (bike shedding) based on
> that being close to the current total consensus weight so relay
> operators won't see a large (though inconsequential) change.
>
> 2. Have all the torflow operators switch to this new method.
>
> Ouch. I wouldn't mind being told I'm wrong about this step being
> necessary.
>
> 3. Find all the places that hint at consensus weight being directly
> comparable to bandwidth (such as [3]) and change the wording.
>
> Matt
>
> [0]: https://paste.debian.net/1015409/
> [1]: https://trac.torproject.org/projects/tor/wiki/org/meetings/2018Rom
> e/Notes/BandwidthAuthorityRequirements#Scaling
> [2]: https://trac.torproject.org/projects/tor/ticket/25459
> [3]: https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2290
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Consensus-health single-relay data

2018-03-09 Thread Tom Ritter
Please test http://utternoncesense.com/ : check the end of the page.
(Note this is a static link, I'm not updating it every hour).

I have tested it on Tor Browser and High Security Slider, seems to
work for me, but I want feedback on the UX and for bugs. Obviously
this doesn't work with js disabled, but it should at least tell you
what the issue is.

---

Under the covers while generating the detailed page, I also generate a
list of indexes for each relay. I lazy-load that list at the end of
the index page load (about 700kb). When you ask for a fingerprint it
makes a range request using that index. Even trying to be as speedy as
I could be, the lazy-load is quite fast. I'm hopeful the use case of
trying to paste a fingerprint in as fast as one can is acceptable and
speedy, so give that a shot.
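
(Concretely, each per-relay lookup boils down to an XHR with a Range
header against the static index, something like this; host, path, and
byte offsets here are illustrative:)

GET /relay-indexes.txt HTTP/1.1
Host: consensus-health.torproject.org
Range: bytes=812340-812730
X-Requested-With: XMLHttpRequest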

I was almost able to do this with no Apache config, but it turns out
that the HTTP spec is ambiguous about how to handle Range requests on
gzipped content, so Apache just doesn't handle them. 'SetEnvIf
X-Requested-With XMLHttpRequest no-gzip' will fix it for this use
case.

In the future I expect to be able to extend this to inline-load
historical data from the prior consensus when you click the <- button;
but I have to give some more thought to how I want to display that.
(And it's more complicated in general.)

-tom

On 7 March 2018 at 15:43, nusenu <nusenu-li...@riseup.net> wrote:
>
>
> Tom Ritter:
>> teor suggested the other day that it'd be really useful to be able to
>> see the vote data for a single relay;
>
> great to see this is being worked on - thanks for that!
> Previously we also mentioned this feature in the context of onionoo,
> but processing votes is not something onionoo does currently (and it is a lot
> more data to take).
>
>
> --
> https://mastodon.social/@nusenu
> twitter: @nusenu_
>
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Consensus-health single-relay data

2018-03-07 Thread Tom Ritter
teor suggested the other day that it'd be really useful to be able to
see the vote data for a single relay; since the _entire_ detailed page
is huge and unwieldy.

I've been pondering how I could support this without complicating the
server, which results in a few constraints:
a) I really don't want to create one file per-relay
b) I really don't want to make any server-side logic, I'd like to keep
it as static HTML and client-side javascript.  (With the client-side
javascript optional for most of the information.)


I think I can make something work using AJAX and Range requests
though. Before I fly forward with that though, I wanted to understand
a bit more about the use cases where one wants to see per-relay vote
data.

The biggest question I have for you teor (and anyone else who wants to
chime in) is how do you know what relay you want the information for?
Do you know the fingerprint?

If so, I could try adding a text box at the bottom of the index page
saying "Supply a fingerprint, and I will populate vote data for this
relay inline".

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [prop-meeting] [prop#267] "Tor Consensus Transparency"

2018-02-17 Thread Tom Ritter
On 17 February 2018 at 00:31, isis agora lovecruft
 wrote:
> 1. Tuesdays @ 18:00 UTC (10:00 PST/13:00 EST/20:00 CET/05:00+1 AEDT)

This time works for me.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] monitoring significant drops of flags in dirauth votes

2018-02-11 Thread Tom Ritter
I think the doctor notification is the best mechanism.

I'm not opposed to adding more graphs to consensus-health, but I think
I'd want to coordinate with the metrics team. There was talk about
them absorbing consensus health in some capacity, so I'd prefer to
avoid doing a lot of work on graphs if it's going to be redone or
throw away.

The host running depictor was down for several days, which explains
the gap in data.

Thanks for the thoughts!

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal: Expose raw bwauth votes

2018-01-15 Thread Tom Ritter
Updated proposal attached.

On 12 December 2017 at 12:44, isis agora lovecruft <i...@torproject.org> wrote:
>> Status: Open
>
> I changed things recently, you'll need a "Ticket:" field if your proposal is
> in state {OPEN,ACCEPTED,CLOSED,FINISHED}. [0]
>
> (Although, maybe we shouldn't require "Ticket:" for state OPEN, so as not to
> hinder calling it OPEN and discussing it even for those things which don't
> yet have tickets?)

I added Ticket: https://trac.torproject.org/projects/tor/ticket/21377


>> An authority SHOULD publish the bwauth vote used to calculate its
>> current vote. It should make the bwauth vote file available at the
>> same time as its normal vote file. It should make the file available
>> at
>>   http:///tor/status-vote/next/bwauth.z
>
> If it's "next", how is it possible to expose it at the same time as its vote
> which is based upon it?  Maybe we should change the URL to be "current"?

teor suggested 'now'?  I'll make it whichever you think it should be =)


>> The raw bwauth vote file does not [really: is not believed to] expose
>> any sensitive information.  All authorities currently make this
>> document public already, an example is at
>>   https://bwauth.ritter.vg/bwauth/bwscan.V3BandwidthsFile
>
> Maybe we want to think about resource exhaustion attacks if we're making a
> standardised interface available for it?  The response, after all, is
> likely always going to be much larger than the request.

teor suggested compressing and streaming from disk?

-tom
Filename: xxx-expose-bwauth_votes.txt
Title: Have Directory Authorities expose raw bwauth vote documents
Author: Tom Ritter
Created: 11-December-2017
Status: Open
Ticket: https://trac.torproject.org/projects/tor/ticket/21377

1. Introduction

Bandwidth Authorities (bwauths) perform scanning of the Tor Network
and calculate observed speeds for each relay. They produce a 'bwauth
vote file' that is given to a Directory Authority. The Directory
Authority uses the speed value from this file in its vote file
denoting its view of the speed of the relay.

After collecting all of the votes from other Authorities, a consensus
is calculated, and the consensus's view of a relay's speed is
determined by choosing the low-median value of all the authorities'
values for each relay.

Only a single metric from the bwauth vote file is exposed by a 
Directory Authority's vote; however, the original file contains
considerably more diagnostic information about how the bwauth arrives
at that measurement for that relay.

2. Motivation

The bwauth vote file contains more information than is exposed in the
overall vote file. This information is useful to debug anomalies in
relays' utilization and suspected bugs in the (decrepit) bwauth code.

Currently, all bwauths expose the raw vote file through various (non-
standard) means, and that file is downloaded (hourly) by a single person
(as long as his home internet connection and home server are working)
and archived (with a small amount of robustness).

It would be preferable to have this exposed in a standard manner.
Doing so would no longer require bwauths to run HTTP servers to expose
the file, no longer require them to take additional manual steps to
provide it, and would enable public consumption by any interested
parties.  We hope that Collector will begin archiving the files.

3. Specification

An authority SHOULD publish the bwauth vote used to calculate its
current vote. It SHOULD make the bwauth vote file available at all
times, and provide the file that it has most recently used for its
vote (even if the vote is not currently published.) It SHOULD make
the file available at
  http:///tor/status-vote/now/bwauth-legacy.z

It MUST NOT attempt to send its bwauth vote file in a HTTP POST to
other authorities and it SHOULD NOT make bwauth vote files from other
authorities available.

Clients interested in consuming the document should download it when
votes are created. (For the existing Tor network, this is at HH:50,
or 50 minutes after each hour.)

4. Security Implications

The raw bwauth vote file does not [really: is not believed to] expose
any sensitive information.  All authorities currently make this
document public already, an example is at
  https://bwauth.ritter.vg/bwauth/bwscan.V3BandwidthsFile

5. Compatibility

Exposing the document presents no compatibility concerns.

The compatibility concern is with applications that want to consume
the document. The bwauth vote file has no specification, and has been
extended in ad-hoc ways. Applications that merely wish to archive the
document (e.g. Collector) won't have problems. Applications that
want to parse it may encounter errors if a new (unexpected) field is
added, if a new format is specified and fields are removed, or
assumptions are made about the text encoding or formatting of the
document.

Re: [tor-dev] Proposal: Expose raw bwauth votes

2018-01-15 Thread Tom Ritter
Sending two replies, with an updated proposal in the second.

On 11 December 2017 at 18:38, teor  wrote:
>> It should make the file available
>> at
>>   http:///tor/status-vote/next/bwauth.z
>
> We shouldn't use next/ unless we're willing to cache a copy of the file
> we actually used to vote. If we do, we should serve it from next/ as
> soon as we vote using it, then serve it from current/ as soon as the
> consensus is created.
>
> If we don't store a copy of the file, we should use a different URL,
> like status-vote/now/bwauth, and recommend that the file is fetched at
> hh:50, when the votes are created. This would allow us to implement
> current/bwauth and next/bwauth in a future version.

Sure.

> Have you thought about versioning the URL if we have multiple flavours
> of bwauth file? We could use bwauth- like consensuses.

For lack of a better name I'll propose bwauth-legacy?

> Also, Tor has new compression options for zstd and lzma.
>
> Given that this is an externally-controlled file, we could stream it
> from disk and compress it on the fly with something cheap like gzip
> or zstd.

I haven't seen any indicated in dir-spec how to handle those? Or how I
should change the proposal to accommodate them? Should I make the url
.gz and say that the DirAuth should compress it and stream it from
disk?



>> The raw bwauth vote file does not [really: is not believed to] expose
>> any sensitive information.  All authorities currently make this
>> document public already, an example is at
>>   https://bwauth.ritter.vg/bwauth/bwscan.V3BandwidthsFile
>
> How large is the file?
> Maybe we should pre-compress it, to avoid CPU exhaustion attacks.
> If we did this, we could say that it's safe, as long as it is smaller
> than the full consensus or full set of descriptors.

~2.6MB. See above.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [tor-project] Intent to Minimise Effort: Fallback Directory Mirrors

2018-01-08 Thread Tom Ritter
On 8 January 2018 at 20:56, teor  wrote:
> Add a torrc option and descriptor line to opt-in as a FallbackDir [4]

Setting a config entry is easy and requires no thought. It's easy to
set without understanding the requirements or implications. Getting a
personal email and request for one's relay to keep a stable IP lends a
lot of gravity to the importance of it.

Are you worried about that?

Work-wise, the worst thing that would happen is that the list just
needs to be regenerated more frequently - I think? But user-wise
people on those older versions will have significantly increased
startup times if only 20% of their FallbackDirs are working...

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Proposal: Expose raw bwauth votes

2017-12-11 Thread Tom Ritter
I'm not sure, but I think
https://trac.torproject.org/projects/tor/ticket/21377 needed a
proposal so I tried to write one up.

-tom
Filename: xxx-expose-bwauth_votes.txt
Title: Have Directory Authorities expose raw bwauth vote documents
Author: Tom Ritter
Created: 11-December-2017
Status: Open

1. Introduction

Bandwidth Authorities (bwauths) perform scanning of the Tor Network
and calculate observed speeds for each relay. They produce a 'bwauth
vote file' that is given to a Directory Authority. The Directory
Authority uses the speed value from this file in its vote file
denoting its view of the speed of the relay.

After collecting all of the votes from other Authorities, a consensus
is calculated, and the consensus's view of a relay's speed is
determined by choosing the low-median value [or is it high-median?]
of all the authorities' values for each relay.

Only a single metric from the bwauth vote file is exposed by a 
Directory Authority's vote; however, the original file contains
considerably more diagnostic information about how the bwauth arrives
at that measurement for that relay.

2. Motivation

The bwauth vote file contains more information than is exposed in the
overall vote file. This information is useful to debug anomalies in
relays' utilization and suspected bugs in the (decrepit) bwauth code.

Currently, all bwauths expose the raw vote file through various (non-
standard) means, and that file is downloaded (hourly) by a single person
(as long as his home internet connection and home server are working)
and archived (with a small amount of robustness).

It would be preferable to have this exposed in a standard manner.
Doing so would no longer require bwauths to run HTTP servers to expose
the file, no longer require them to take additional manual steps to
provide it, and would enable public consumption by any interested
parties.  We hope that Collector will begin archiving the files.

3. Specification

An authority SHOULD publish the bwauth vote used to calculate its
current vote. It should make the bwauth vote file available at the
same time as its normal vote file. It should make the file available
at
  http:///tor/status-vote/next/bwauth.z

It MUST NOT attempt to send its bwauth vote file in a HTTP POST to
other authorities and it SHOULD NOT make bwauth vote files from other
authorities available.

4. Security Implications

The raw bwauth vote file does not [really: is not believed to] expose
any sensitive information.  All authorities currently make this
document public already, an example is at
  https://bwauth.ritter.vg/bwauth/bwscan.V3BandwidthsFile

5. Compatibility

Exposing the document presents no compatibility concerns.

The compatibility concern is with applications that want to consume
the document. The bwauth vote file has no specification, and has been
extended in ad-hoc ways. Applications that merely wish to archive the
document (e.g. Collector) won't have problems. Applications that
want to parse it may encounter errors if a new (unexpected) field is
added, or assumptions are made about the text encoding or formatting
of the document.
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-12-08 Thread Tom Ritter
On 8 December 2017 at 15:48, teor <teor2...@gmail.com> wrote:
>
> On 9 Dec 2017, at 03:27, Tom Ritter <t...@ritter.vg> wrote:
>
>>> We introduce a new HTTP header called "Onion-Location" with the exact same
>>>   restrictions and semantics as the Location HTTP header.
>>
>> For reference, this is https://tools.ietf.org/html/rfc7231#section-7.1.2
>
> Because this is a non-standard header, does it need to be spelled:
> "X-Onion-Location"?

Nope =)
https://tools.ietf.org/html/rfc6648

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-12-08 Thread Tom Ritter
On 8 December 2017 at 09:06, George Kadianakis  wrote:
> As discussed in this mailing list and in IRC, I'm posting a subsequent
> version of this proposal. Basic improvements:
> - Uses a new custom HTTP header, instead of Alt-Svc or Location.
> - Does not do auto-redirect; it instead suggests the onion based on
>   antonella's mockup: 
> https://trac.torproject.org/projects/tor/attachment/ticket/21952/21952.png
>
>
>
> 
> UX improvement proposal: Onion redirects using Onion-Location HTTP header
> 
>
> 1. Motivation:
>
>Lots of high-profile websites have onion addresses these days (e.g. Tor ,
>NYT, blockchain, ProPublica).  All those websites seem confused on what's
>the right way to inform their users about their onion addresses. Here are
>some confusion examples:
>  a) torproject.org does not even advertise their onion address to Tor 
> users (!!!)
>  b) blockchain.info throws an ugly ASCII page to Tor users mentioning 
> their onion
> address and completely wrecking the UX (loses URL params, etc.)
>  c) ProPublica has a "Browse via Tor" section which redirects to the 
> onion site.
>
>Ideally there would be a consistent way for websites to inform their users
>about their onion counterpart. This would provide the following positives:
>  + Tor users would use onions more often. That's important for user
>education and user perception, and also to partially dispel the 
> darkweb myth.
>  + Website operators wouldn't have to come up with ad-hoc ways to 
> advertise
>their onion services, which sometimes results in complete breakage of
>the user experience (particularly with blockchain)
>
>This proposal specifies a simple way forward here that's far from perfect,
>but can still provide benefits and also improve user-education around 
> onions
>so that in the future we could employ more advanced techniques.
>
>Also see Tor ticket #21952 for more discussion on this:
>   https://trac.torproject.org/projects/tor/ticket/21952
>
> 2. Proposal
>
>We introduce a new HTTP header called "Onion-Location" with the exact same
>restrictions and semantics as the Location HTTP header.

For reference, this is https://tools.ietf.org/html/rfc7231#section-7.1.2

> Websites can use the
>Onion-Location HTTP header to specify their onion counterpart, in the same
>way that they would use the Location header.
>
>The Tor Browser intercepts the Onion-Location header (if any) and informs
>the user of the existence of the onion site, giving them the option to 
> visit
>it. Tor Browser only does so if the header is served over HTTPS.
>
>Browsers that don't support Tor SHOULD ignore the Onion-Location header.
>
> 3. Improvements
>
> 4. Drawbacks
>
> 4.1. No security/performance benefits
>
>While we could come up with onion redirection proposals that provide
>security and performance benefits, this proposal does not actually provide
>any of those.
>
>As a matter of fact, the security remains the same as connecting to normal
>websites (since we trust its HTTP headers), and the performance gets worse
>since we first need to connect to the website, get its headers, and then
>also connect to the onion.

I would specifically call out that the user has provided any
identifying information (cookies) that may be present, as well as
opened themselves to any possible browser-based attack vector served
by the target domain.

>Still _all_ the website approaches mentioned in the "Motivation" section
>suffer from the above drawbacks, and sysadmins still come up with ad-hoc
>ways to inform users about their onions. So this simple proposal will still
>help those websites and also pave the way forward for future auto-redirect
>techniques.
>
> 4.2. Defining new HTTP headers is not the best idea
>
>This proposal defines a new non-standard HTTP header. This is not great
>because it makes Tor into a "special" thing that needs to be supported with
>special headers. However, the fact that it's a new HTTP header that only
>works for Tor is a positive thing since it means that non-Tor browsers will
>just ignore it.
>
>Furthermore, another drawback is that this HTTP header will increase the
>bandwidth needlessly if it's also served to non-Tor clients. Hence websites
>with lots of client traffic are encouraged to use tools that detect Tor
>users and only serve the header to them (e.g. tordnsel).

I would talk about how users could experience false positives and
false negatives if this mechanism is used.



I think it is also worth addressing that this does not stop sysadmins
from (trying to) detect tor users, and send the onion address in the
Location header, thus triggering a non-prompting 

Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread Tom Ritter
On 15 November 2017 at 05:35, Alec Muffett  wrote:
> Apologies, I am waiting for a train and don't have much bandwidth, so I will
> be brief:
>
> 1) There is no point in issuing  to anyone unless
> they are accessing  via an exit node.
>
> 2) It's inefficient to issue the header upon every web access by every
> person in the world; when the header is only relevant to 1-in-a-few-thousand
> users, you will be imposing extra bandwidth cost upon the remaining
> 99.99...% -- which is unfair to them

Agreed (mostly). I could see use cases where users not accessing a
website via Tor may wish to know an onionsite is available, but they
are also the vast minority.


> 3) Therefore: the header should only be issued to people arriving via an
> exit node.  The means of achieving this are
>
> a) Location
>
> b) Bending Alt-Svc to fit and breaking web standards
>
> c) Creating an entirely new header
>
> 4) Location already works and does the right thing.  Privacy International
> already use this and issue it to people who connect to their .ORG site from
> an Exit Node.
>
> 5) Bending Alt-Svc to fit, is pointless, because Location already works
>
> 6) Creating a new header? Given (4) and (5) above, the only potential
> material benefit of it that I can see would be to "promote Tor branding" -
> and (subjective opinion) this would actually harm the cause of Tor at all
> because it is *special*.
>
> 6 Rationale) The majority the "Dark Net" shade which had been thrown at Tor
> over the past 10 years has pivoted upon "needing special software to
> access", and creating (pardon me) a "special" header to onionify a fetch
> seems to be promoting the weirdness of Tor, again.
>
> The required goal of redirection to the corresponding Onion site does not
> require anything more than a redirect, and - pardon me - but there are
> already 4x different kinds of redirects that are supported by the Location
> header (301, 302, 307, 308) with useful semantics. Why reinvent 4 wheels
> specially for Tor?


I think there are some additional things to gain by using a new header:

Software that understands the header can handle it differently than
Location. I think the notification bar and the 'Don't redirect me to
the onionsite' options are pretty good UI things we should consider.
They're actually not great UX, but it might be 'doing our part' to try
and not confuse users about trusted browser chrome.[0]

Users who _appear_ to be coming from an exit node but are not using
Tor are not blackholed. How common is this? I've seen reports from
users who do this. If I were in a position to, I would consider having
exit node traffic 'blend into' more general non-exit traffic (like a
university connection) just to make the political statement that "Tor
traffic is internet traffic".

Detecting exit nodes is error prone, as you point out. Some exit nodes
have their traffic exit a different address than their listening
port.[1]


Location is really close to what we need, but it is limited in some
ways. I'm still on the fence.


[0] Except of course that notification bars are themselves spoofable
chrome but lets ignore that for now...
[1] Hey does Exonerator handle these?



On 15 November 2017 at 07:38, Alec Muffett  wrote:
> I can see how you would think that, and I would kind-of agree, but at least
> this would be local and cheap.  Perhaps instead of a magic protocol, it
> should be a REST API that's embedded in the local Tor daemon?  That would be
> a really, REALLY common pattern for an enterprise to query.

This information should already be exposed via the Control Port,
although there would be more work on behalf of the implementer to
parse more information than desired and pare it down to what is
needed.
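
(A rough sketch of what that lookup could look like with stem; untested,
the control port and address are illustrative, and a real check would
also have to handle exits whose traffic leaves from a different address
than their listening port, as noted above:)

from stem import Flag
from stem.control import Controller

def is_tor_exit(ip):
    # Ask the local tor daemon whether this address belongs to a relay
    # carrying the Exit flag in its current view of the consensus.
    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        for status in controller.get_network_statuses():
            if status.address == ip and Flag.EXIT in status.flags:
                return True
    return False

print(is_tor_exit('203.0.113.7'))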

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-14 Thread Tom Ritter
I am a big proponent of websites advertising .onions in their Alt-Svc headers.


On 14 November 2017 at 06:51, George Kadianakis  wrote:
> 3.1. User education through notifications
>
>To minimize the probability of users freaking out about auto-redirects Tor
>Browser could inform the user that the auto-redirect is happening. This
>could happen with a small notification bar [*] below the URL bar informing
>users that "Tor Browser is auto-redirecting you to the secure onion site".
>
>The notification bar could also have a question mark button that provides
>the user with additional information on the merits of onion sites and why
>they should like them.
>
>[*]: like this one: 
> http://www.topdreamweaverextensions.com/UserFiles/Image/firefox-bar.jpg


I think this is a good idea, and would be the best place to put the
"Never show me this again" option.

> 4.2. No security/performance benefits
>
>While we could come up with auto-redirect proposals that provide security
>and performance benefits, this proposal does not actually provide any of
>those.
>
>As a matter of fact, the security remains the same as connecting to normal
>websites (since we trust its HTTP headers), and the performance gets worse
>since we first need to connect to the website, get its headers, and then
>also connect to the onion.
>
>Still _all_ the website approaches mentioned in the "Motivation" section
>suffer from the above drawbacks, and sysadmins still come up with ad-hoc
>ways to inform users about their onions. So this simple proposal will still
>help those websites and also pave the way forward for future auto-redirect
>techniques.

I envision a future Onion Everywhere extension, like HTTPS Everywhere,
that works similarly to the HSTS preload list. Crawlers validate a
website's intention to be in the Onion Everywhere extension, and we
cache the Alt-Svc information so it is used on first load.


> 4.3. Alt-Svc does not do exactly what we want
>
>I read in a blog post [*] that using Alt-Svc header "doesn’t change the URL
>in the location bar, document.location or any other indication of where the
>resource is; this is a “layer below” the URL.". IIUC, this is not exactly
>what we want because users will not notice the onion address, they will not
>get the user education part of the proposal and their connection will still
>be slowed down.
>
>I think we could perhaps change this in Tor Browser so that it rewrites the
>onion address to make it clear to people that they are now surfing the
>onionspace.
>
>[*]: https://www.mnot.net/blog/2016/03/09/alt-svc


I am a big opponent of changing the semantics of Alt-Svc.

We'd have to change the semantics to only do redirection for onion
domains. We'd also have to figure out how to handle cases where the
onion lives alongside non-onion (which takes precedence?) We'd also
have to maintain and carry this patch ourselves because it's pretty
antithetical to the very intent of the header and I doubt the
networking team at Mozilla would be interested in maintaining it.

Besides those issues, it also eliminates Alt-Svc as a working option
for something *else* websites may want: to silently route users to
their .onion _without_ changing the address bar, and so without the
possibility of confusing the user. I think Alt-Svc is an option for
partial petname support in TB.

There is a perfectly functioning mechanism for redirecting users: the
Location header. It does a lot of what you want: including temporary
or persistent redirects, updating the addess bar. Obviously it doesn't
work for all users, most don't know what .onion is, so Facebook isn't
going to deploy a .onion Location redirect even if they attempted to
detect TB users.

But they could send a new Onion-Redirect header that is recognized and
processed (with a notification bar) by any UA that supports Tor and
wants to implement it. This header would have a viable path to uplift,
support in extensions, and even standardization. Onion Everywhere can
preload these headers too.
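
(Something like this, purely illustrative; the header name, onion
address, and path are hypothetical:)

HTTP/1.1 200 OK
Content-Type: text/html
Onion-Redirect: http://exampleonionservicexyz.onion/current/path?q=1

... page body served as usual, for UAs that ignore the header ...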



On 14 November 2017 at 11:25, teor  wrote:
>> 4. Drawbacks
>
> You missed the biggest one:
>
> If the onion site is down, the user will be redirected to the downed site.
> (I've used onion site redirects with this issue, it's really annoying.)
> Similarly, if a feature is broken on the onion site, the user will be
> redirected to a site they can't use.
>
> Or if the user simply wants to use the non-onion site for some reason
> (maybe they want a link they can share with their non-onion friends,
> or maybe they don't want to reveal they're using Tor Browser).
>
> Users *must* have a way to disable the redirect on every redirect.

Right now, I don't agree with this. (I reserve the right to change my
mind.) Websites can already redirect users to broken links through
mistakes. Why is "my onion site is not running" a scenario we want to

Re: [tor-dev] Your input on the Tor Metrics Roadmap 2017/18

2017-10-06 Thread Tom Ritter
On 6 October 2017 at 04:48, Karsten Loesing  wrote:
>  - tasks we're missing or that we're listing as long-term goals (Q4/2018
> or later) that you think should have higher priority over the tasks we
> picked for the time until Q3/2018,

bwauth related things, such as:

- How much do bwauths agree?
- How much does geography affect the bwauth's measurements?
- How can we tell if a change, or new bwauth code, is producing good data?
- What is 'good data'?

>  - tasks that you'd want to contribute to in any way, in which case we
> might reach out to you when we start working on them.

I've begun, slowly, to try and answer some of those questions, but my
methodology is not as rigorous as yours would be.

And obviously I'll help out with any consensus-health stuff as you
need me/I'm able.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Are we planning to use the "package" mechanism?

2017-06-19 Thread Tom Ritter
On 16 June 2017 at 13:15, Roger Dingledine  wrote:
> On Fri, Jun 16, 2017 at 02:08:53PM -0400, Nick Mathewson wrote:
>> With proposal 227 in 0.2.6.3-alpha, we added a way for authorities to
>> vote on e.g. the latest versions of the torbrowser package.
>>
>> It appears we aren't actually using that, though.  Are we planning to
>> use it in the future?
>
> Last I checked, the authority operators were uncomfortable with the
> slippery slope of "everybody who has some sort of package sends us their
> filename and checksums", because then every Tor client and relay fetches
> that text every hour forever, and we could imagine that blob of text
> growing out of hand.
>
> That said, having the directory authorities vote about a checksum
> of a file, and that file contains all the things, and somebody else
> coordinates what goes in that file, how to handle name spaces in it,
> etc, sounds like it could be totally doable.
>
> That said, from the directory authority perspective, we would want to
> automate the process of voting about that file -- not have the authority
> operators manually check the file and change the sha256 every time
> somebody updates it.
>
> For example, we could wget the file and then put the checksum into our
> votes, thus giving some sort of primitive perspective-access-network
> style robustness.
>
> I don't know what this approach would do to the security assumptions
> from that proposal though.

This sounds like the right approach is a transparency log - the
authorities include a single hash; and it's the responsibility of all
interested packagers to put their submissions in the log...
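
(Illustrative sketch only, not a concrete design: the authorities would
vote on a single digest, e.g. a Merkle root, over all submitted package
entries; the entries below are made up:)

import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Collapse all package entries into the single hash the authorities
    # would include in their votes. (Simplified; a real log would follow
    # something like RFC 6962.)
    nodes = [h(leaf.encode()) for leaf in sorted(leaves)]
    if not nodes:
        return h(b'')
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

entries = [
    "package torbrowser 7.0.1 https://example.org/tb.tar.xz sha256=...",
    "package tor 0.3.1.9 https://example.org/tor.tar.gz sha256=...",
]
print(merkle_root(entries).hex())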

-tom

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] maatuska's bwscanner down since 2017-04-14 -> significant drop in relay traffic

2017-04-20 Thread Tom Ritter
On 20 April 2017 at 10:09, Ian Goldberg  wrote:
> On Thu, Apr 20, 2017 at 10:54:21AM -, relayopera...@openmailboxbeta.com 
> wrote:
>> Hi Tom!
>> since maatuska's bwscanner is down [1] I see a significant drop of traffic 
>> on many of my relays, and I believe this is related.
>> Do you have any update to [2] on when maatuska will report bwscan results 
>> again?
>> thanks,
>> a concerned relayoperator
>
> I am also seeing a strange sudden drop in usage:
>
> https://atlas.torproject.org/#details/BCEDF6C193AA687AE471B8A22EBF6BC57C2D285E
>
> What's going on?


I can confirm that maatuska's bwauth has been down since last weekend
and that we've been unable to get the box back up. We're seeing if we
can stand up a new one. No ETA. Sorry for the minimal communication on
it, but we haven't had a more detailed update.

Determining whether the loss of maatuska's bwauth is the particular
thing affecting a particular relay is labor intensive at this time:
you'd need to see whether maatuska had been on the high side of the
median for your relay's measured value, resulting in a lower assignment
when it went away.  You could do this through historical
consensus-health documents (use the arrow on the detailed page next to
your relay) or through vote+consensus comparisons.  Not user friendly
though.
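
(Concretely, the thing you're checking for: the consensus takes the
low-median of the bwauths' measurements, so losing a measurement that
sat above the median can pull the chosen value down. Python sketch with
made-up names and numbers:)

def low_median(values):
    # The consensus picks the low-median of the measured values.
    values = sorted(values)
    return values[(len(values) - 1) // 2]

# made-up measurements for one relay, keyed by bwauth
measured = {'bwauthA': 4000, 'bwauthB': 5000, 'bwauthC': 5500,
            'bwauthD': 6000, 'maatuska': 9000}

print(low_median(list(measured.values())))                              # 5500
print(low_median([v for k, v in measured.items() if k != 'maatuska']))  # 5000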

We are working on better bwauth analysis tools in #21992, #21993,
#21994, and #21882 - but those are bwauths-as-a-whole, not individual
relay analysis.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Rethinking Bad Exit Defences: Highlighting insecure and sensitive content in Tor Browser

2017-04-06 Thread Tom Ritter
On 6 April 2017 at 07:53, Donncha O'Cearbhaill <donn...@donncha.is> wrote:
> Tom Ritter:
>> It seems reasonable but my first question is the UI. Do you have a
>> proposal?  The password field UI works, in my opinion, because it
>> shows up when the password field is focused on. Assuming one uses the
>> mouse to click on it (and doesn't tab to it from the username) - they
>> see it.
>>
>> How would you communicate this for .onion links or bitcoin text? These
>> fields are static text and would not be interacted with in the same
>> way as a password field.
>>
>> A link could indeed be clicked - so that's a hook for UX... A bitcoin
>> address would probably be highlighted for copying so that's another
>> hook... But what should it do?
>
> Thank you all for the suggestions in this thread. I agree that we need
> to tie down a preliminary UI. I'm seeing two key hooks that we could use:
>
> * Detecting navigation from an insecure page to an onion URL or
> bitcoin:// address.
> * Reading and alerting to Bitcoin or onion addresses in the clipboard
> buffer.
>
> I've been working on a proof-of-concept extension which implements both
> of these hooks.
>
> The "clipboardRead" permission is needed to read the contents of the
> clipboard from a Firefox extension. This was implemented in Firefox 54
> (2017-02-13) in Mozilla bug #1312260 [1]. Unfortunately it will be quite
> some time before Firefox 54 is included in an ESR release. The Mozilla
> patch for this permission is < 100 lines. Is this a feature that the TBB
> team might consider back-porting to Tor Browser?
>
> I agree with David, this UI should be as intrusive as possible to
> prevent users from shooting themselves in the foot. IMO navigation to
> onion URLs from HTTP should be completely blocked. I also think that we
> should wipe the users clipboard buffer if we detect a valid Bitcoin
> address in it.
>
> The UI could suggest that a user manually retypes the Bitcoin or onion
> address if they are certain that it is correct. I hope this type of
> intrusive warning will reduce risky behaviour and encourage any Tor
> related web services to move to TLS only.

[no hats]

Please no. Please give me any sort of intrusive thing I have to click
through, but do not make me manually retype a bitcoin or onion address.
That is a usability nightmare; I would prefer you hide the value
entirely, so the user thinks it's a problem with the website rather
than hating Tor Browser.

Here's another idea besides click-through banners: using the
extension, create some sort of scratchpad that auto-populates the
bitcoin/onion address (and the user's Exit Node). Then reload the page
in a new circuit. Detect or prompt the user to compare them. If
they're the same, say "Phew, okay everything seems to be okay" and if
they're not, say "Jinkies! Would you consider pasting this information
in a bug report so we can investigate?"

Caveat: I don't know how common it is for HTTP websites with bitcoin
addresses to auto-generate payment addresses for privacy.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] GSoC 2017 - Project "Crash Reporter for Tor Browser"

2017-04-02 Thread Tom Ritter
On 1 April 2017 at 09:22, Nur-Magomed <nmag...@gmail.com> wrote:
> Hi Tom,
> I've updated Proposal[1] according to your recommendations.
>
> 1) https://storm.torproject.org/grain/ECCJ3Taeq93qCvPJoWJkkY/

Looks good to me!

> 2017-03-31 19:46 GMT+03:00 Tom Ritter <t...@ritter.vg>:
>>
>> On 31 March 2017 at 10:27, Nur-Magomed <nmag...@gmail.com> wrote:
>> >> I think we'd want to enhance this form. IIRC the 'Details' view is
>> >> small and obtuse and it's not easy to review. I'm not saying we
>> >> _should_ create these features, but here are a few I brainstormed:
>> >
>> > Yes, actually that form only shows "Key: Value" list, we can break it
>> > down
>> > in several GroupBoxes which consist of grouped data field and checkboxes
>> > to
>> > include.
>> >
>> >> Let's try and avoid GDocs if you don't mind :)
>> >
>> > Sorry :) I already registered on storm, but I had no access to create.
>> > Thanks for the review, I'll update the proposal according to your requirements.
>>
>> No worries.
>>
>> > And question: could we throw Windows or MacOS or both versions from
>> > timeline, and develop them after summer?
>>
>> Yes, I think that's fine. I think getting one platform to completion
>> would be a great accomplishment and would lay the groundwork and
>> improve the momentum to getting the subsequent platforms there.
>>
>> -tom
>> ___
>> tor-dev mailing list
>> tor-dev@lists.torproject.org
>> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
>
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] GSoC 2017 - Project "Crash Reporter for Tor Browser"

2017-03-31 Thread Tom Ritter
On 31 March 2017 at 10:27, Nur-Magomed  wrote:
>> I think we'd want to enhance this form. IIRC the 'Details' view is
>> small and obtuse and it's not easy to review. I'm not saying we
>> _should_ create these features, but here are a few I brainstormed:
>
> Yes, actually that form only shows a "Key: Value" list; we can break it down
> into several GroupBoxes which consist of grouped data fields and checkboxes to
> include.
>
>> Let's try and avoid GDocs if you don't mind :)
>
> Sorry :) I already registered on storm, but I had no access to create.
> Thanks for the review, I'll update the proposal according to your requirements.

No worries.

> And question: could we throw Windows or MacOS or both versions from
> timeline, and develop them after summer?

Yes, I think that's fine. I think getting one platform to completion
would be a great accomplishment and would lay the groundwork and
improve the momentum to getting the subsequent platforms there.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] GSoC 2017 - Project "Crash Reporter for Tor Browser"

2017-03-30 Thread Tom Ritter
On 28 March 2017 at 16:22, Nur-Magomed  wrote:
> Hi, Georg,
> Thank you!
>
>> We should have a good user interface ready giving the user at least an
>> explanation on what is going on and a way to check what is about to be
>> sent.
>
> I've also thought about that, I suppose we could just put text explanations
> on Crash Reporter client UI form [1].

I think we'd want to enhance this form. IIRC the 'Details' view is
small and obtuse and it's not easy to review. I'm not saying we
_should_ create these features, but here are a few I brainstormed:

- A much bigger, clearer Details window with:
- The ability to include or exclude individual sections of the report
(for example, Hardware information would not be included by default,
but maybe we give the user the ability to include it)
- The ability to perform text searches for keywords of their choosing
to spot-check if they are present in the report

Just ideas.

> I've written the Proposal [2], could you review it and leave comments? Thanks.

Let's try and avoid GDocs if you don't mind :)

I put your document here:
https://storm.torproject.org/shared/DHc8GjUYr8aUNeO2ZcOjTc1xG3pwbburIQoLYB9wkAz
(I don't know if you can create storm documents, but you could use
pad.riseup.net ) and put comments on it.

> P.S. Do I have to send the proposal to GSoC as a draft?

I don't know the answer to this, but hopefully Damian does?

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Rethinking Bad Exit Defences: Highlighting insecure and sensitive content in Tor Browser

2017-03-28 Thread Tom Ritter
It seems reasonable but my first question is the UI. Do you have a
proposal?  The password field UI works, in my opinion, because it
shows up when the password field is focused on. Assuming one uses the
mouse to click on it (and doesn't tab to it from the username) - they
see it.

How would you communicate this for .onion links or bitcoin text? These
fields are static text and would not be interacted with in the same
way as a password field.

A link could indeed be clicked - so that's a hook for UX... A bitcoin
address would probably be highlighted for copying so that's another
hook... But what should it do?

-tom


On 28 March 2017 at 10:31, Donncha O'Cearbhaill  wrote:
> Hi all,
>
> The Tor bad-relay team regularly detects malicious exit relays which are
> actively manipulating Tor traffic. These attackers appear financial
> motivated and have primarily been observed modifying Bitcoin and onion
> address which are displayed on non-HTTPS web pages.
>
> Increasingly these attackers are becoming more selective in their
> targeting. Some attackers are only targeting a handful of pre-configured
> pages. As a result, we often rely on Tor users to report bad exits and
> the URLs which are being targeted.
>
> In Firefox 51, Mozilla started to highlight HTTP pages containing
> password form fields as insecure [1]. This UI clearly and directly
> highlights the risk involved in communicating sensitive data over HTTP.
>
> I'd like to investigate ways that we can extend a similar UI to Tor
> Browser which highlight Bitcoin and onion addressed served over HTTP. I
> understand that implementing this type of Bitcoin and onion address
> detection would be less reliable than Firefox's password field
> detection. However even if unreliable it could increase safety and
> increase user awareness about the risks of non-secure transports.
>
> There is certainly significant design work that needs to be done to
> implement this feature. For example, .onion origins need to be treated as
> secure, but only if they don't include resources from non-secure
> origins. We would also need to make the onion/bitcoin address detection
> reliable against active obfuscation attempts by malicious exits.
>
> I'd like to hear any and all feedback, suggestions or criticism of this
> proposal.
>
> Kind Regards,
> Donncha
>
>
> [1]
> https://blog.mozilla.org/security/2017/01/20/communicating-the-dangers-of-non-secure-http/
>
>
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] GSoC 2017 - Project "Crash Reporter for Tor Browser"

2017-03-20 Thread Tom Ritter
Hi Nur-Magomed,

Great to have you interested in this!

So we would want to use the Crash Reporter that's built into Mozilla
Firefox (which is called Breakpad, and is adapted from Chromium).  At
a high level, I would break down the project into the following
sections:

1) Get the crash reporter built (at all) in our toolchain. We
currently disable it and I know there will be at least one or two
hurdles to overcome here as we've never tried to build this on
Linux-for-Windows.  If you wish you could focus on a single platform
at a time (e.g. Linux) so you can move on to the next step.

2) Audit the crash reporter data and see what it is that gets
reported, when, and how. We'd want to err on the side of caution about
what we report in a dump. So we'd need to enumerate each field that
gets reported, get some samples of the data, and review if we'd want
to include it or not. We'd also want to review what prefs govern crash
submissions, how crashes get stored (which I think is on-disk next to
Tor Browser), and when they get reported.

3) Change the way they get reported. We absolutely cannot have crashes
sitting around on disk next to Tor Browser for the next time the user
starts the browser - no matter how much data we strip out of them. So
we'll need to brainstorm how we might try submitting them immediately
upon crash instead of next startup.

4) Get a submission server running. Mozilla has a ton of tools to
analyze crashes (https://crash-stats.mozilla.org/home/product/Firefox
is one and https://github.com/mozilla/socorro is the general backend).
We should look at Socorro and probably adapt it for use by Tor rather
than building our own. (There's a rough sketch of what a submission
client might look like after this list.)

5) Circle back and get the crash reporter built reproducibly, and for
all platforms. I put this one last because there may be annoying
time-sinks here, and by doing it last you'll be able to make the most
headway on the things that will take the most time - like enumerating,
documenting, and evaluating the fields, and fiddling with Socorro.
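
To make steps 3 and 4 a bit more concrete, here is a minimal sketch of
what a "submit immediately on crash" client could look like against a
Socorro-style collector. The URL is made up, and the field name and
crash-ID response are assumptions based on how Breakpad-style
collectors conventionally work - this is illustrative, not anything
Tor runs today:

    import requests  # assumption: a plain HTTPS POST is an acceptable transport

    # Hypothetical collector URL; a real deployment would point at
    # whatever Socorro-style endpoint Tor ends up running.
    COLLECTOR_URL = "https://crash-reports.example.org/submit"

    def submit_crash(minidump_path, annotations):
        """POST a minidump plus a stripped-down set of crash annotations.

        'upload_file_minidump' is the field name Breakpad-style
        collectors conventionally expect (assumption).
        """
        with open(minidump_path, "rb") as dump:
            response = requests.post(
                COLLECTOR_URL,
                files={"upload_file_minidump": dump},
                data=annotations,  # e.g. {"ProductName": "TorBrowser", ...}
                timeout=30,
            )
        response.raise_for_status()
        return response.text  # collectors typically return a crash ID (assumption)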


This is my take on it - Georg may have additional thoughts.

-tom

On 20 March 2017 at 09:01, teor  wrote:
>
>
>> On 19 Mar 2017, at 19:02, Nur-Magomed  wrote:
>>
>> Hi!
>> I'm interesred with project "Crash Reporter for Tor Browser".
>> I'm working on that idea, but I need some specifications about how it should 
>> work, what kind of crash information we have to get and what technologies I 
>> can use on server side (for collect information).
>> ...
>
> Hi Nur-Magomed,
>
> I've cc'd the mentors for the Crash Reporter project on this email.
>
> Please be aware that we have a meeting this week and next week, so some
> of us are busy travelling and working together in person.
>
> T
>
> --
> Tim Wilson-Brown (teor)
>
> teor2345 at gmail dot com
> PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
> ricochet:ekmygaiu4rzgsk6n
> xmpp: teor at torproject dot org
> 
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Make Tor Browser Faster GSOC Project

2017-03-17 Thread Tom Ritter
On Fri, Mar 17, 2017 at 2:07 AM, Kartikey singh
 wrote:
> Hi I'm interested in Make Tor Browser Faster gsoc project. Please guide me
> for the same.

Hi Kartikey,

For Tor, the best place to discuss this is on the tor-dev mailing
list, which I've included. You should subscribe and we can talk about
this there. As I understand it, student selections have not been made
for GSOC yet, so please don't take this as a guarantee that you'll be
able to be funded to work on this.

Anyway, the topic on the website is a bit ambiguous, so I've attempted
to flesh out the project more here:

https://storm.torproject.org/shared/URdVCz8eCbBfQzYwG3gaR-KuCvMTIS3zU7emq3AF7A3

I'd welcome input from the rest of the tor community on this as well.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Scheduling future Tor proposal reading groups

2016-11-29 Thread Tom Ritter
On 29 November 2016 at 13:55, teor  wrote:
>
> All of the above seem like a good idea.
>
>>  - prop273: Exit relay pinning for web services ?
>
> This got some negative feedback on the mailing list that I tend to agree with,
> the proposal should either be shelved, or heavily modified to address the
> client attacks it enables.
>
> (I'm not sure it's possible to modify it to address the attacks.)

+1 to teor's comments =)

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [Proposal] A simple way to make Tor-Browser-Bundle more portable and secure

2016-10-30 Thread Tom Ritter
On Oct 29, 2016 12:52 PM, "Yawning Angel"  wrote:
>
> On Sat, 29 Oct 2016 11:51:03 -0200
> Daniel Simon  wrote:
> > > Solution proposed - Static link the Tor Browser Bundle with musl
> > > libc.[1] It is a simple and fast libc implementation that was
> > > especially crafted for static linking. This would solve both
> > > security and portability issues.
>
> This adds a new security issue of "of all the things that should
> have ASLR, it should be libc, and it was at one point, but we started
> statically linking it for some stupid reason".

If this is accurate - that statically linking libc will enable pre-built
ROP chains because libc ends up at a predictable memory address - I would
strongly oppose it for this reason alone.

It would be a major step backwards in security.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [Proposal] A simple way to make Tor-Browser-Bundle more portable and secure

2016-10-29 Thread Tom Ritter
On May 9, 2016 9:15 AM, "Daniel Simon"  wrote:
>
> Hello.
>
> How it's currently done - The Tor Browser Bundle is dynamically linked
> against glibc.
>
> Security problem - The Tor Browser Bundle has the risk of information
> about the host system's library ecosystem leaking out onto the
> network.

So I'm not a libc expert, would you be willing to unpack this for me and
explain what sorts of data can leak and how? It seems to me that it would
require some high amount of attacker control - control of arguments to
functions, inspecting memory layout, or code execution...

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] handling TLS Session Ticket/Identifier for Android

2016-10-24 Thread Tom Ritter
The info I gave you was for Tor Browser, and the latter part (about session
IDs) is actually wrong. TBB disables both.

https://trac.torproject.org/projects/tor/ticket/20447#ticket
https://gitweb.torproject.org/tor-browser.git/tree/security/manager/ssl/nsNSSComponent.cpp?h=tor-browser-45.4.0esr-6.5-1#n724

Also: https://trac.torproject.org/projects/tor/ticket/4099

Core Tor also disables both, AFAICT:
https://gitweb.torproject.org/tor.git/commit/?id=8743080a289a20bfaf0a67d6382ba0c2a6d6534d
https://gitweb.torproject.org/tor.git/tree/src/common/tortls.c#n1164

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal 274: A Name System API for Tor Onion Services

2016-10-10 Thread Tom Ritter
The minorest of comments.

On 7 October 2016 at 15:06, George Kadianakis  wrote:
>For example here is a snippet from a torrc file:
>OnionNamePlugin 0 .hosts  /usr/local/bin/local-hosts-file
>OnionNamePlugin 1 .zkey   /usr/local/bin/gns-tor-wrapper
>OnionNamePlugin 2 .bit    /usr/local/bin/namecoin-tor-wrapper
>OnionNamePlugin 3 .scallion   /usr/local/bin/community-hosts-file
>
> 2.3.1. Tor name resolution logic
>
>When Tor receives a SOCKS request to an address that has a name
>plugin assigned to it, it needs to perform a query for that address
>using that name plugin.
>
>If there are multiple name plugins that correspond to the requested
>address, Tor queries all relevant plugins sorted by their priority
>value, until one of them returns a successful result. If two plugins
>have the same priority value, Tor MUST abort.

If priority is only consulted when two tlds have the same value, why
are they required to all be different?  (I guess just to make the
file simpler to read?)
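
For reference, the resolution logic in 2.3.1 boils down to something
like the following sketch (illustrative Python, not the proposal's
code; the plugin invocation interface here - address as an argument,
result on stdout - is my assumption):

    import subprocess

    # A hypothetical in-memory version of the OnionNamePlugin lines above:
    # (priority, suffix, path-to-plugin)
    PLUGINS = [
        (0, ".hosts", "/usr/local/bin/local-hosts-file"),
        (1, ".zkey", "/usr/local/bin/gns-tor-wrapper"),
    ]

    def resolve(address):
        """Query plugins matching the address, in priority order, until
        one succeeds; abort on duplicate priorities as the text requires."""
        matching = sorted(
            (p for p in PLUGINS if address.endswith(p[1])),
            key=lambda p: p[0],
        )
        priorities = [p[0] for p in matching]
        if len(priorities) != len(set(priorities)):
            raise RuntimeError("two plugins share a priority value; aborting")
        for _priority, _suffix, plugin_path in matching:
            result = subprocess.run([plugin_path, address],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return result.stdout.strip()
        return None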

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal 273: Exit relay pinning for web services

2016-10-06 Thread Tom Ritter
I think directing users to an onion service would be significantly
simpler and better in several regards. Aside from the 'onion servers
can't get DV SSL certs' problem, are there others Yawning or I have not
mentioned?


As far as the proposal goes itself, I agree with Roger that the
problem of user partitioning is pretty scary, and the 'network
observation' approach is not a strong solution.

I like Roger's idea of shipping pinning documents inside Tor Browser. As
I mentioned in the tbb-dev thread, I think a OneCRL-like method would
be a good solution for updating. Specifically Mozilla Kinto:
https://wiki.mozilla.org/Firefox/Kinto which is designed for this.

Browsers currently have a problem with HPKP preloads wrt expiration
and bricking. (As we saw recently!) Updating _that_ mechanism to use
Kinto might improve security and usability there as well, without
impacting user partitioning. And it might be something Mozilla
is interested in doing themselves (meaning we don't need to build it).
If we got to this before them, we could always ship a static preload
list per-version.

I have some comments on the draft itself, but the above higher-level
ones take precedence.

On 5 October 2016 at 15:09, Philipp Winter  wrote:
> 2. Design
>
> 2.1 Overview
>
>A simple analogy helps in explaining the concept behind exit relay
>pinning: HTTP Public Key Pinning (HPKP) allows web servers to express
>that browsers should pin certificates for a given time interval.
>Similarly, exit relay pinning (ERP) allows web servers to express
>that Tor Browser should prefer a predefined set of exit relays.  This
>makes it harder for malicious exit relays to be selected as last hop
>for a given website.
>
>Web servers advertise support for ERP in a new HTTP header that
>points to an ERP policy.  This policy contains one or more exit
>relays, and is signed by the respective relay's master identity key.
>Once Tor Browser obtained a website's ERP policy, it will try to
>select the site's preferred exit relays for subsequent connections.
>The following subsections discuss this mechanism in greater detail.


HSTS and HPKP include a 'preload' mechanism to bake things into the
browser. I think TB would need the same thing, at a minimum, in
addition to the header approach.


> 2.2 Exit relay pinning header
>
>Web servers support ERP by advertising it in the "Tor-Exit-Pins" HTTP
>header.  The header contains two directives, "url" and "max-age":
>
>  Tor-Exit-Pins: url="https://example.com/pins.txt"; max-age=2678400
>
>The "url" directive points to the full policy, which MUST be HTTPS.
>Tor Browser MUST NOT fetch the policy if it is not reachable over
>HTTPS.  Also, Tor Browser MUST abort the ERP procedure if the HTTPS
>certificate is not signed by a trusted authority.  The "max-age"
>directive determines the time in seconds for how long Tor Browser
>SHOULD cache the ERP policy.
>
>After seeing a Tor-Exit-Pins header in an HTTP response, Tor Browser
>MUST fetch and interpret the policy unless it already has it cached
>and the cached policy has not yet expired.
>
> 2.3 Exit relay pinning policy
>
>An exit relay pinning policy MUST be formatted in JSON.  The root
>element is called "erp-policy" and it points to a list of pinned exit
>relays.  Each list element MUST contain two elements, "fingerprint"
>and "signature".  The "fingerprint" element points to the
>hex-encoded, uppercase, 40-digit fingerprint of an exit relay, e.g.,
>9B94CD0B7B8057EAF21BA7F023B7A1C8CA9CE645.  The "signature" element
>points to an Ed25519 signature, uppercase and hex-encoded.  The
>following JSON shows a conceptual example:
>
>{
>  "erp-policy": [
>"start-policy",
>{
>  "fingerprint": Fpr1,
>  "signature": Sig_K1("erp-signature" || "example.com" || Fpr1)
>},
>{
>  "fingerprint": Fpr2,
>  "signature": Sig_K2("erp-signature" || "example.com" || Fpr2)
>},
>...
>{
>  "fingerprint": Fprn,
>  "signature": Sig_Kn("erp-signature" || "example.com" || Fprn)
>},
>"end-policy"
>  ]
>}
>
>Fpr refers to a relay's fingerprint as discussed above.  In the
>signature, K refers to a relay's master private identity key.  The ||
>operator refers to string concatenation, i.e., "foo" || "bar" results
>in "foobar".  "erp-signature" is a constant and denotes the purpose
>of the signature.  "start-policy" and "end-policy" are both constants
>and meant to prevent an adversary from serving a client only a
>partial list of pins.
>
>The signatures over fingerprint and domain are necessary to prove
>that an exit relay agrees to being pinned.  The website's domain --
>in this case example.com -- is part of the signature, so third
>parties such as evil.com cannot coerce 

Re: [tor-dev] Tor Browser downloads and updates graphs

2016-09-12 Thread Tom Ritter
On 12 September 2016 at 03:37, Rob van der Hoeven
 wrote:
> One thing bothers me. The update requests graph never touches zero. It
> should, because that would mean that all Tor browsers have been updated.
> 100.000 seems to be the lowest value.

I'm not surprised by this at all. I think a very common mode of usage
is people who have TB on their computer but don't use it regularly. (I
have several friends like this.) Only when they want to search for
something 'embarrassing' (medical conditions, etc) will they use it.
With an update cycle of one-two months between releases, it's likely
these people are actually _never_ up to date (unless they choose to
restart TB during their browsing session.)

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Adding depictor/stem to Jenkins

2016-07-05 Thread Tom Ritter
On 5 July 2016 at 14:34, Damian Johnson  wrote:
> Hi Tom, just food for thought but another option would be a cron task
> that pulls the repos and runs that if there's a change. That's what I
> do for stem's website so it reflects the changes I push.

I think that's a good model for webpages-backed-by-git, but less so
for integration/software testing.

I could probably rig something up like that locally, but I'd have to
make it detect a crash and email me. The advantage of jenkins is it
will do that automatically, on infrastructure I don't have to worry
about maintaining[0], as well as making the crash details public.
There's a chance it might even detect a break in stem.

-tom

[0] A problem I've had a lot is that my 'email me stuff' scripts will
stop emailing... and I never find out they're failing!
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Bridge Directory Consensus

2016-06-07 Thread Tom Ritter
Have you checked the data directory of the Bridge Authority?  I think
the data is in a file called networkstatus-bridges?

-tom

On 7 June 2016 at 09:39, Nicholas R. Parker (RIT Student)
 wrote:
> I've got a quick question for you all.
> I have a functioning bridge directory authority and a bridge that's
> presumably pushing its descriptor to the authority (presents no error
> messages at all).
> My question is this: How do I confirm an update to the "bridge master list"
> on the authority? It doesn't appear on the general
> ip/tor/status-vote/consensus page, but that would make sense.
>
> Is there a bridge list generated to hold the descriptors?
>
> Nicholas R. Parker
> Rochester Institute of Technology
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [::]/8 is marked as private network, why?

2016-03-29 Thread Tom Ritter
On 29 March 2016 at 02:29, Sebastian Hahn  wrote:
> I've been wondering about the private_nets const in src/or/policies. It
> was added in a96c0affcb4cda1a2e0d83d123993d10efc6e396 but Nick doesn't
> remember why, and I'm hoping someone has an idea (maybe teor, who I've
> CCed here, who documented this in a later commit?). If nobody knows why
> we do this I think we should remove it as likely incorrect.

::/8 is reserved by the IETF; it is (a superset of) the deprecated
space for "IPv4-Compatible IPv6 Addresses".  The addresses are not to
be reassigned for any other purposes.

Authoritative source:
http://www.iana.org/assignments/ipv6-address-space/ipv6-address-space.xhtml

I'm not necessarily sure what private networks are all used for in
Tor, but maybe this explains it and makes sense?

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] How to build a Router that will only allow Tor users

2016-03-15 Thread Tom Ritter
On 15 March 2016 at 10:52, Martin Kepplinger  wrote:
> Hi,
>
> I try to configure OpenWRT in a way that it will only allow outgoing
> connections if it is Tor. Basically it is the opposite of "blacklisting
> exit relays on servers": "whitelisting (guard) relays for clients". It
> should *not* run Tor itself.
>
> A first test setup (onionoo document, ipset and iptables) kind of
> worked. It's definitely doable, but not totally trivial in the end.
>
> What did *not* work, was starting Torbrowser. That's a hard requirement,
> and before bebugging it through I ask: Do I miss something when I just
> allow outgoing connections to
>
>  * Guard,
>  * Authority,
>  * and HSDir flagged relays (do I *need* them? that's a different
> question probably)

Well it won't work with bridges obviously, including the hardcoded
ones in TBB...
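
For what it's worth, the onionoo-based whitelist described above might
start from something like this sketch (it assumes the public Onionoo
instance and its 'details' endpoint, and it only covers relay ORPorts -
bridges, as noted, are a separate problem):

    import json
    import urllib.request

    ONIONOO = ("https://onionoo.torproject.org/details"
               "?running=true&flag=Guard&fields=or_addresses")

    def guard_addresses():
        """Collect the OR addresses (ip:port) of running Guard relays,
        e.g. to feed an ipset/iptables whitelist."""
        with urllib.request.urlopen(ONIONOO) as response:
            data = json.load(response)
        addresses = set()
        for relay in data.get("relays", []):
            addresses.update(relay.get("or_addresses", []))
        return sorted(addresses)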

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Set up Tor private network

2016-02-25 Thread Tom Ritter
On 25 February 2016 at 21:00, SMTP Test  wrote:
> Hi all,
>
> I try to set up a Tor private network. I found two tutorials online
> (http://liufengyun.chaos-lab.com/prog/2015/01/09/private-tor-network.html
> and https://ritter.vg/blog-run_your_own_tor_network.html) but seems that
> they both are outdated. Could anyone please give me a tutorial or some hints
> on building a private Tor network?

Can you explain what you ran into that was outdated or wasn't working?
 While time marches on and tor is not quite the same as when I wrote
that - I'm not sure what would have completely broken since then...

> Another question is: what is the minimum
> number of required directory authorities for a private Tor network? I am
> wondering if one directory authority is enough.

I never tested with 1. I know 3 works.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Summary of meek's costs, October 2015

2015-11-20 Thread Tom Ritter
On 18 November 2015 at 16:32, David Fifield  wrote:
> There was an unfortunate outage of meek-amazon (not the result of
> censorship, just operations failure). Between 30 September and 9 October
> the bridge had an expired HTTPS certificate.
> [tor-talk] Outage of meek-amazon
> 
> https://lists.torproject.org/pipermail/tor-talk/2015-October/039231.html
> 
> https://lists.torproject.org/pipermail/tor-talk/2015-October/039234.html
> And then, as a side effect of installing a new certificate, the bridge's
> fingerprint changed, which caused Tor Browser to refuse to connect. It
> used to be that we didn't include fingerprints for the meek bridges, but
> now we do, so we didn't anticipate this error and didn't notice it
> quickly.
> Update the meek-amazon fingerprint to 
> B9E7141C594AF25699E0079C1F0146F409495296
> https://trac.torproject.org/projects/tor/ticket/17473
> [tor-talk] Changed fingerprint for meek-amazon bridge (attn support)
> 
> https://lists.torproject.org/pipermail/tor-talk/2015-November/039397.html
> Interestingly, the meek-amazon bridge still had about 400 simultaneous
> users (not as much as normal) during the time when the fingerprint
> didn't match. I would have expected it to go almost to zero. Maybe it's
> people using an old version of Tor Browser (from before March 2015) or
> some non–Tor Browser installation.

It seems like it would be better to use the SPKI rather than the cert
fingerprint; this would allow you to reissue a certificate with the same
key and keep things working for older clients.
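
For clarity, pinning the SPKI means hashing just the
SubjectPublicKeyInfo rather than the whole certificate, so a reissued
certificate that reuses the key keeps the same pin. A rough sketch
using the Python 'cryptography' package (illustrative only):

    import base64
    import hashlib

    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat,
    )

    def spki_sha256(pem_cert_bytes):
        """Base64 SHA-256 over the SubjectPublicKeyInfo; it stays stable
        across certificate reissues that keep the same key."""
        cert = x509.load_pem_x509_certificate(pem_cert_bytes)
        spki = cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo
        )
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()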

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] stale entries in bwscan.20151029-1145

2015-11-05 Thread Tom Ritter
[+tor-dev]

So... weird. I dug into Onyx primarily. No, in scanner.1/scan-data I
cannot find any evidence of Onyx being present.  I'm not super
familiar with the files torflow produces, but I believe the bws- files
list what slice each relay is assigned to.  I've put those files
(concatted) here: https://bwauth.ritter.vg/bwauth/bws-data

Those relays are indeed missing.

Mike: is it possible that relays are falling in between _slices_ as
well as _scanners_?  I thought the 'stop listening for consensus'
commit would mean that a single scanner would use the same
consensus for all the slices in the scanner...

-tom

[0] 
https://gitweb.torproject.org/torflow.git/commit/NetworkScanners/BwAuthority?id=af5fa45ca82d29011676aa97703d77b403e6cf77

On 5 November 2015 at 10:48,  <starlight.201...@binnacle.cx> wrote:
> Hi Tom,
>
> Scanner 1 finally finished the first pass.
>
> Of the list of big relays not checked
> below, three are still not checked:
>
> *Onyx   10/14
>  atomicbox1 10/21
> *naiveTorer 10/15
>
> Most interesting, ZERO evidence of
> any attempt to use the two starred
> entries appears in the scanner log.
> 'atomicbox1' was used to test
> other relays but was not tested
> itself.
>
> Can you look in the database files
> to see if any obvious reason for
> this exists?  These relays are
> very fast, Stable-flagged relays
> that rank near the top of the
> Blutmagie list.
>
>
>
>
>
>>Date: Thu, 29 Oct 2015 19:57:52 -0500
>>To: Tom Ritter <t...@ritter.vg>
>>From: starlight.201...@binnacle.cx
>>Subject: Re: stale entries in bwscan.20151029-1145
>>
>>Tom,
>>
>>Looked even more closely.
>>
>>I filtered out all relays that are
>>not currently active, ending up with
>>a list of 6303 live relays.
>>
>>1065 or 17% of them have not been
>>updated for five or more days,
>>292 or 4% have not been updated
>>for ten days, and 102 or 1%
>>have not been updated for 15
>>days.
>>
>>In particular I know of a very fast
>>high quality relay in a CDN-grade
>>network that has not been measured
>>in 13 days.  My relay Binnacle
>>is a well run relay in the
>>high-quality Verizon FiOS network
>>and has not been measured for 10 days.
>>
>>This does not seem correct.
>>
>>
>>P.S. Here is a quick list of some
>>top-30 relays that have have been
>>seriously neglected:
>>
>>redjohn1    10/9
>>becks   10/15
>>aurora  10/20
>>Onyx        10/14
>>IPredator   10/15
>>atomicbox1  10/21
>>sofia   10/14
>>naiveTorer  10/15
>>quadhead    10/12
>>3cce3a91f6a625  10/13
>>apx2        10/14
>>
>>
>>
>>>At 13:35 10/29/2015 -0400, you wrote:
>>>
>>>>The system is definitely active.  . . .the most recent file has ten day old 
>>>>entries?
>>>
>>>Just looked more closely.  About 2500
>>>of 8144 lines (30%) have "updated_at=" more
>>>than five days ago or 2015/10/24 00:00 UTC.
>>>
>>>Seems like something that should have
>>>an alarm check/monitor.
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] stale entries in bwscan.20151029-1145

2015-11-05 Thread Tom Ritter
Talked with Mike on IRC:

12:12 < tjr:#tor-dev> mikeperry: If you have a moment today, we'd
appreciate it if you could peek at the tor-dev thread 'stale entries
in bwscan.20151029-1145'
12:14 < mikeperry:#tor-dev> that seems to be one of the mails I lost
12:14 < mikeperry:#tor-dev> (had a mail failure a couple weeks back)
12:14 < mikeperry:#tor-dev> oh, nm, it just hasn't arrived yet
12:16 < mikeperry:#tor-dev> tjr: torflow does indeed fetch a new
consensus for each slice now. they could be falling in between them :/
12:16 < mikeperry:#tor-dev> but the unmeasured scanner didn't pick them up even?
12:17 < tjr:#tor-dev> They are measured
12:17 < tjr:#tor-dev> they're big fast relays
12:18 < tjr:#tor-dev> Hm.  Conceptually, do you see a problem with
locking a single consensus at the startup of a scanner?
12:24 < mikeperry:#tor-dev> tjr: it made them finish much faster,
since they didn't have to keep churning and spending CPU and making
additional measurements as relays come in and
out, but I was wondering if it would make
the gap problem worse
12:26 < mikeperry:#tor-dev> it seems odd that three relays would be
missed by all scanners though. I wonder what is special about them
that is causing them to fall through the
cracks for everyone for so long
12:26 < tjr:#tor-dev> Wait I'm confused. When you say "it" you mean
fetching a new consensus every slice, right?  Why would fetching a new
consensus every slice use _less_ CPU and
  do less churning. It seems that _would_ cause
new relays come in and out and make the gap problem worse
12:27 < mikeperry:#tor-dev> tjr: because what the code used to do was
listen for new consensus events, and dynamically update the slice and
the relays as the consensus came in
12:27 < tjr:#tor-dev> So these 3 should be covered by scanner1. They
were skipped, and I'm theorizing because they fell through gaps in the
slices inside scanner1
12:27 < mikeperry:#tor-dev> that would mean that every new consensus
period, the scanning machine would crawl to a stop, and also that
relays would shift around in the slice during
that time
12:28 < tjr:#tor-dev> Okay, yea dynamically updating the slice in the
middle of the slice definetly sounds bad.
12:28 < tjr:#tor-dev> I'm proposing pushing it back even further -
instead of a new consensus each slice, lock the consensus at the
beginngin of a scanner for all slices
12:28 < mikeperry:#tor-dev> that is harder architecturally because of
the process model
12:29 < mikeperry:#tor-dev> though maybe we could have the
subprocesses continue on for multiple slices

So them falling between the slices would be my best guess.  The
tedious way to confirm it would be to look at the consensus at the
times each slice began (in bws-data), match up the slice ordering, and
confirm that (for all N) when slicenum=N began Onyx was expected to be
in slicenum=Not-N

-tom

On 5 November 2015 at 11:11, Tom Ritter <t...@ritter.vg> wrote:
> [+tor-dev]
>
> So... weird. I dug into Onyx primarily. No, in scanner.1/scan-data I
> cannot find any evidence of Onyx being present.  I'm not super
> familiar with the files torflow produces, but I believe the bws- files
> list what slice each relay is assigned to.  I've put those files
> (concatted) here: https://bwauth.ritter.vg/bwauth/bws-data
>
> Those relays are indeed missing.
>
> Mike: is it possible that relays are falling in between _slices_ as
> well as _scanners_?  I thought the 'stop listening for consensus'
> commit would mean that a single scanner would use the same
> consensus for all the slices in the scanner...
>
> -tom
>
> [0] 
> https://gitweb.torproject.org/torflow.git/commit/NetworkScanners/BwAuthority?id=af5fa45ca82d29011676aa97703d77b403e6cf77
>
> On 5 November 2015 at 10:48,  <starlight.201...@binnacle.cx> wrote:
>> Hi Tom,
>>
>> Scanner 1 finally finished the first pass.
>>
>> Of the list of big relays not checked
>> below, three are still not checked:
>>
>> *Onyx   10/14
>>  atomicbox1 10/21
>> *naiveTorer 10/15
>>
>> Most interesting, ZERO evidence of
>> any attempt to use the two starred
>> entries appears in the scanner log.
>> 'atomicbox1' was used to test
>> other relays but was not tested
>> itself.
>>
>> Can you look in the database files
>> to see if any obvious reason for
>> this exists?  These relays are
>> very fast, Stable-flagged relays
>> that rank near the top of the
>> Blutmagie list.
>>
>>
>>
>>
>>
>>>Date: Thu, 29 Oct 2015 19:57:52 -0500
>>>To: Tom Ritter <t...@ritter.vg>
>>&

Re: [tor-dev] Proposal 258: Denial-of-service resistance for directory authorities

2015-11-05 Thread Tom Ritter
On 29 October 2015 at 11:25, Nick Mathewson  wrote:
>There are two possible ways a new connection to a directory
>authority can be established, directly by a TCP connection to the
>DirPort, or tunneled inside a Tor circuit and initiated with a
>begindir cell.  The client can originate the former as direct
>connections or from a Tor exit, and the latter either as fully
>anonymized circuits or one-hop links to the dirauth's ORPort.

Relays fetch the consensus from a V2Dir. Thus there is no risk that an
attacker can prevent an exit from fetching a consensus by (trying to)
DOS the DirAuths through it. I believe that's correct, just wanted to
say it out loud and let everyone confirm I guess.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] stale entries in bwscan.20151029-1145

2015-11-05 Thread Tom Ritter
On 5 November 2015 at 16:37,  <starlight.201...@binnacle.cx> wrote:
> At 11:47 11/5/2015 -0600, Tom Ritter wrote:
>> . . .
>>So them falling between the slices would be my
>>best guess. . .
>
> Immediately comes to mind that dealing
> with the changing consensus while
> scanning might be handled in a different
> but nonetheless straightforward manner.
>
> Why not create a snapshot of the consensus
> at the time scanning commences then--
> without disturbing the scanners--
> pull each new update with an asynchronous
> thread or process.  The consensus thread
> would diff against the previous state
> snapshot and produce an updated snapshot
> plus deltas for each scanner and/or slice
> as the implementation requires.  Then
> briefly lock the working list for each
> active scanner and apply the delta to it.
>
> By having a single thread handle
> consensus retrieval and sub-division,
> issues of "lost" relays should
> go away entirely.  No need to hold
> locks for extended periods.
>
> The consensus allocation thread would
> run continuously, so individual slices
> and scanners can complete and restart
> asynchronous to each other without
> glitches or delays.
>
> Consensus allocation worker could also
> manage migrating relays from one scanner
> to another, again preventing lost
> relays.

So I'm coming around to this idea, after spending an hour trying to
explain why it was bad. I thought "No no, let's do this other
thing..." and then I basically designed what you said.  So the main
problem as I see it is that it's easy to move relays between slices
that haven't happened yet - but how do you do this when some slices
are completed and some aren't?

Relay1 is currently in scanner2 slice2, but the new consensus came in
and it should be in scanner1 slice 14.  Except scanner2slice2 was
already measured and scanner1slice14 has not been.  What do you do?

Or the inverse.  Relay2 is currently in scanner1 slice14. But the new
consensus says it should be in scanner2 slice 2.  But scanner2slice2
was already measured, and scanner1slice14 had not been.  You can only
move a relay between two slices that have yet to be measured.  But
everything is 'yet to be measured' unless you're going to halt the
whole bwauth after one entire cycle and then start over again.

Which... if you used a work queue instead of a scanner, might actually work...?

We could make a shared work queue of slices, and do away with the idea
of 'separate scanners for different-speed relays'... When there's
no more work, we would get a new consensus and make a new work queue
off of that.  We would assign work items in a scanner-like pattern,
and as we get new consensuses with new relays that weren't in any
slices, just insert them into existing queued work items. (Could also
go through and remove missing relays too.)

Moving new relays into the closest-matching slice isn't hard, and
swapping relays between yet-to-be-retrieved slices isn't that hard
either.  The pattern to select work items is now the main source of
complexity - it needs to estimate how long it takes a work item to
complete, and give out work items such that it always keeps some gaps
around to insert new relays into that aren't _too_ far away from that
relay's speed.  (Which is basically what the scanner separation was
set up for.)  It could also fly off the rails by going "Man these fast
relays take forever to measure, let's just give out 7 work items of
those" - although I'm not sure how bad that would be. Needs a
simulator maybe.
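
Very roughly, the work-queue idea might look something like this
(purely illustrative - the slice size, the speed metric, and the
insertion rule are all made up here, and real torflow would need much
more care):

    from collections import deque

    SLICE_SIZE = 50  # made-up number

    def build_queue(relays):
        """Slice the current consensus into work items by descending
        bandwidth."""
        ordered = sorted(relays, key=lambda r: r["bandwidth"], reverse=True)
        return deque(ordered[i:i + SLICE_SIZE]
                     for i in range(0, len(ordered), SLICE_SIZE))

    def insert_new_relay(queue, relay):
        """When a newer consensus adds a relay, drop it into the pending
        slice whose median speed is closest, instead of losing it in a
        gap between slices."""
        best = min(
            queue,
            key=lambda s: abs(s[len(s) // 2]["bandwidth"] - relay["bandwidth"]),
        )
        best.append(relay)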



FWIW, looking at
https://bwauth.ritter.vg/bwauth/AA_scanner_loop_times.txt , it seems
like (for whatever weird reason) scanner1 took way longer than the
others. (Scanner 9 is very different, so ignore that one.)

Scanner 1
 5 days, 11:07:27
Scanner 2
 3 days, 19:00:03
Scanner 3
 2 days, 19:48:15
 2 days, 9:36:13
Scanner 4
 2 days, 18:42:21
 2 days, 19:41:16
Scanner 5
 2 days, 13:21:20
 2 days, 11:20:53
Scanner 6
 2 days, 20:19:48
 2 days, 13:46:30
Scanner 7
 2 days, 9:04:49
 2 days, 12:50:34
Scanner 8
 2 days, 14:31:50
 2 days, 15:05:28
Scanner 9
 20:29:42
 20:52:32
 14:42:08
 13:59:27
 10:25:06
 9:27:27
 9:52:41
 9:52:36
 14:52:38
 15:09:23
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Bridge Guards (prop#188) & Bridge ORPort Reachability Tests

2015-09-10 Thread Tom Ritter
On 10 September 2015 at 02:01, isis  wrote:
> 2.a. First, if there aren't any other reasons for self-testing: Is Bridge
>  reachability self-testing actually helpful to Bridge operators in
>  practice?  Don't most Bridge operators just try to connect, as a
>  client, to their own Bridge to see if it's working correctly?  (This
>  is what I usually do, at least…)

Yes it is helpful; no I don't/can't always do that.  IPv6 is one
issue*. Finding the correct incantation is another**.  =)

-tom

* Yes I know I can get free IPv6 tunnels from tunnelbroker, no I don't
want to; it is complicated.
** See my email from earlier this summer about piecing everything
together to test a bridge.
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Hash Visualizations to Protect Against Onion Phishing

2015-08-21 Thread Tom Ritter
On 20 August 2015 at 09:24, Jeff Burdges burd...@gnunet.org wrote:

 I first learned about key poems here :
 https://moderncrypto.org/mail-archive/messaging/2014/000125.html
 If one wanted a more language agnostic system, then one could use a
 sequence of icons, but that's probably larger than doing a handful of
 languages.

That led to the attempt to run a usability study on text
representations, which kind of fizzled out:
https://github.com/tomrittervg/crypto-usability-study

The visual systems that are implemented that I'm aware of are:
 - SSH art of course
 - Identicons: 
http://haacked.com/archive/2007/01/22/Identicons_as_Visual_Fingerprints.aspx/
 - Monsters: http://www.splitbrain.org/projects/monsterid
 - Wavatars: http://www.shamusyoung.com/twentysidedtale/?p=1462
 - Unicorns (really):
http://meta.stackoverflow.com/questions/37328/my-godits-full-of-unicorns

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] collector problems since 2015-08-07 18:00?

2015-08-08 Thread Tom Ritter
In the event of collector missing data, there are (at least) two backup
instances. One is at bwauth.ritter.vg - no website, just files.

Does that have the same issue?

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] BOINC-based Tor wrapper

2015-07-20 Thread Tom Ritter
On 19 July 2015 at 20:11, Serg std.s...@gmail.com wrote:
 The basic idea is that users run a preconfigured secure server. BOINC
 downloads it as a virtual machine image.
 The virtual machine gives a secure sandbox to run a relay.

I've set up and run BOINC tasks before.  Unless something has fairly
significantly changed, BOINC is really designed for discrete tasks.
Get some data, do some processing/networking stuff, upload the result.
Tor is designed as an always-running daemon - it never 'completes',
and it doesn't chunk well.  While relay capacity is always welcome,
main forms of participation (Guards, HSDirs, being in stable circuits)
require running for an extended period of time with very good uptime.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Tor + Apache Traffic Server w/ SOCKS - works now!

2015-05-05 Thread Tom Ritter
On 5 May 2015 at 15:30, CJ Ess zxcvbn4...@gmail.com wrote:
 I think we have differing goals, however your or-ctl-filter is very cool and
 I think I will need to add it to my stack.

Could you expand a bit about what function you use ATS for and what the
benefits you get out of it are?  I'm familiar with ATS, but I'm just
not connecting the dots to understand why you're using it to sit
between your browser and tor...

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Draft of proposal Direct Onion Services: Fast-but-not-hidden services

2015-04-15 Thread Tom Ritter
On 10 April 2015 at 07:58, George Kadianakis desnac...@riseup.net wrote:
 One negative aspect of the above suggestions, is that if hidden
 services only listen for connections, then they lose their
 NAT-punching abilities. But I bet that this is not a problem for some
 use cases that would appreciate the corresponding performance boost.
 Theoretically, the above optimizations could also be optional.

 We should think more.

I wobble back and forth on NAT-Punching for DOS (Direct Onion Services ;) ).

On one hand it's awesome to not have to worry about NAT. On the other,
if you're going to run a DOS, presumably you are capable enough to
either traverse it or not have to worry about it?

Ultimately, I think I fall onto the side of wanting to keep the
NAT-punching present. As we support more use cases for OSes (Onion
Services ;) ) we will probably want to support people behind NATs but
willing to compromise anonymity for performance. For example, I could
easily envision someone being the DOS in an audio or video chat and
being behind a NAT. The DOS end helps boost the performance, the
client gets anonymity, everyone gets end-to-end protection.

The difference between a DOS connecting to an IP and a DOS _being_ the
IP seems small, as it's only used for connection setup.  Obviously
being the IP would be faster... perhaps the DOS could choose IPs that
(it believes) it has a low latency connection to? (If that would be
feasible with the information the daemon has available to it?)

On 9 April 2015 at 22:33, Jacob H. Haven ja...@jhaven.me wrote:
 Removing rendezvous nodes is a bigger change to the protocol, but would be
 very useful. Why not enable the client to just directly establish circuit(s)
 directly to the onion service? As David pointed out, this would probably be
 implemented most cleanly with a tag in the HSDir descriptor (this would
 conveniently identify the service as non-hidden, whether that's a good or
 bad thing...)

|direct onion service| - RP - client middle - client guard - 
 |client|

If the RP is removed and the client makes a direct connection to the
DOS, the client is using a two-hop circuit, not a three-hop, right?
If it's a three-hop circuit, it's no different (performance-wise) than
using a RP, right?



Another thought. If the DOS makes a direct connection to the HSDir for
descriptor publishing, the HSDir will be able to (passively) enumerate
which HSes are DOSes, right?  This seems like it would be something we
would want to prevent. (Well, at least require the HSDir to go perform
an active fingerprint to learn this information.) The use of three-hop
circuits to publish this information is not strenuous either.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Renaming arm

2015-03-12 Thread Tom Ritter
Does it backronym to anything? Can it? ;)

-tom
On Mar 10, 2015 11:45 AM, Damian Johnson ata...@torproject.org wrote:

 Hmmm, thread about something as squishy and infinitely debatable as
 a name. What could go wrong? But before you get excited I've already
 picked one, this is just to sanity check with the community that I'm
 not making a stupid mistake... again.

 Five years ago when I started arm [1] I had the following criteria for
 picking its name...

   1. It needs to be short. This is a terminal command so anything
 lengthy is just begging people to alias it down.

   2. It should be reasonably memorable. Sorry, no 'z62*'.

   3. No naming conflict with other technical things. Body parts though
 are fair game.

 Clearly the name failed at #3. When I picked it the ARM processor was
 just barely becoming a thing, but time marched on and now every irc
 discussion goes...

   new_person: New relay operator here, any tips?
   somebody: Hey, try arm!
   new_person: The processor? I'm so confused now.
   somebody: Nope, something completely different. It's a confusing name.

 I'm rewriting the whole bloody codebase so why not fix this along the way?

 So without further ado the name I've picked is 'Seth'. It's easy to
 type (#1), memorable (#2), and from what I can tell no SETH processors
 are on the horizon. I've reserved the name on PyPI, and searchs on
 wikipedia looks fine (most interesting match is the Egyptian god [2],
 which is actually kinda neat).

 So now just a final sanity check with you, our wonderful community.
 Any strong reasons to pick something else? Nothing is set in stone yet
 so still open to alternatives.

 Cheers! -Damian


 [1] https://www.atagar.com/arm/
 [2] https://en.wikipedia.org/wiki/Set_%28mythology%29
 ___
 tor-dev mailing list
 tor-dev@lists.torproject.org
 https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Two TOR questions

2015-03-10 Thread Tom Ritter
On 10 March 2015 at 11:22, John Lee iratemon...@gmx.com wrote:
 For devs,

 1) Where can I get a previous version of Tor Bundle for Windows? I'm looking
 for the version when it jumped from Firefox 24 ESR (or something below
 Firefox 28.0) to the new Firefox GUI that occurred when going above version
 28.0

https://archive.torproject.org/tor-package-archive/torbrowser/ I can't
tell you which version, but you can find it here.


 2) How involved would it be to use a current version of TOR (like 4.0.4,
 etc) but match it with a Firefox 28.0 version or lower? (if I wanted to do
 it for my own use only)

4.0.4 is the version of the entire Tor Browser Bundle archive.  You
could fairly easily use an old Firefox version, and point it at a
local little-t tor instance but you would not be running Tor Browser,
and would lose the fingerprinting, unlinkability, and proxy bypass
defenses.  You could also use an old version of Tor Browser Bundle
that used an old version of Firefox, and point it at a recent version
of little-t tor, but you'd lose the protections where you need it the
most.

In general, running old versions of FF/TBB is not recommended.  Maybe
you could elaborate on why you want to, so we can understand why the
current version doesn't work for you?

 3) Why is the little onion icon missing in TOR 4.0.4 bundle for Windows? Now
 Tor bundle looks exactly the same as regular Firefox and Firefox GUI looks
 like Chrome. There is very little visual differentiation left and this is a
 bit of a concern that I might accidentally cross-contaminate.

Sounds like bug I'm not familiar with...

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Best way to client-side detect Tor user without using check.tpo ?

2015-02-07 Thread Tom Ritter
On 7 February 2015 at 06:59, Fabio Pietrosanti (naif) - lists
li...@infosecurity.ch wrote:
 Is there a right way to detect if a user is on Tor, from a browser,
 without loading an external network resource?

Is the javascript client loaded from a remote website?  If so, what
about embedding the user's remote IP and a list of Tor Exits into a
script and comparing them clientside?

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [tor-assistants] Researching Tor for Master's Thesis

2014-11-26 Thread Tom Ritter
On 26 November 2014 at 06:58, Florian Rüchel
florian.ruechel@inexplicity.de wrote:
 Certificates for HS: I find this topic particularly interesting and have
 followed the discussion. The general concept seems like a great thing to
 achieve and it could actually outperform the regular SSL/CA infrastructure
 stuff as it could remove the need for CAs. Unfortunately, this seems
 something that is not extensive enough to warrant a whole thesis. If you
 guys think otherwise, please let me know.

I think there are some things here that might be large enough.  Specifically:
What is the best way to present an Extended Validation badge in Tor
Browser without requiring a CA signature.  Some ideas that have been
thrown around:
 - Have a .com leaf cert sign a .onion cert, change the green to
orange, and show the original domain name
 - Have some sort of Namecoin/Sovereign-Keys like structure (also
applicable to petnames)
 - User-configurable and managed favorites system in an extension that
petnames a Hidden Service to a name, for that user only

 Tor with mix features: Tor has the explicit goal of being a low-latency
 network. However, there are several protocols where high-latency would be
 acceptable. I liked the idea of high latency HSes
 (https://lists.torproject.org/pipermail/tor-dev/2014-November/007818.html).
 I'd like to know what you think about this idea being viable. It would have
 the advantage of being very flexible from just a theoretic evaluation down
 to a real implementation so I could adjust this to my time. But only if this
 is actually desired so it does not need to stay theoretic. I think it would
 be very interesting to evaluate whether this can improve or hurt anonymity
 of low-latency users, as well.

Lots of people love the idea of getting High-Latency inlaid in the Tor
network.  There is definitely interest here.  This sounds like more
than a 6 month thesis, but maybe if you bit off a chunk of it.


 This would be the bigger topics I have found on which I could see myself
 building a thesis. I also stumbled upon smaller research questions (e.g.
 whether running a bridge/relay is good, bad or doesn't make a difference for
 anonymity) but none of those warrant a full 6 month thesis so I discarded
 them for the moment.

Hm, maybe "Can an attacker distinguish traffic leaving an exit node
from the following three profiles:"
 - User on that machine doing interactive web browsing
 - User SSH-ed into that machine doing interactive web browsing
 - Person using Tor exiting through that relay

I suspect the answer is "Yes, easily." but AFAIK it's never been
demonstrated, and there's an unofficial recommendation you see
repeated in places that says "Oh, run an exit relay so your traffic
mixes with it."

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Specification for 'How to Safely Sign a statement with a .onion key'

2014-11-24 Thread Tom Ritter
Attached is a document written in the specification format for one
aspect of CA-signed .onion addresses - specifically "What is a safe
way to sign (or not sign) a statement using the .onion key?"  It
presents a couple of options - I'd love to get feedback from folks on
which they prefer.

I recognize that no consensus or decision has been reached on whether
to encourage or guide CAs on issuing .onions. (Although I'm in
favor[0].)  Although this is obviously written skewed towards CAs, it
addresses a generic problem, and could be rewritten in that form.

Excerpting from the Introduction/Motivation:

   Several organizations, such as Facebook, (collectively,
   'applicants') have expressed a desire for an end-entity
   SSL certificate valid for their .onion Hidden Service
   address signed by a Certificate Authority (CA) present in
   browser trust stores.

   The existing Facebook .onion URL was issued by Digicert under
   a loophole in certificate issuance requirements. The Basic
   Requirements [0] is a document created by the Certificate
   Authority/Browser Forum that governs acceptable certificate
   issuing policies. Adherence to its policies is required for
   inclusion in many browsers' trust stores. .onion counts as
   an 'internal hostname' as it is not a recognized TLD by IANA.
   November 1, 2015 sets the deadline for when all internal server
   certificates must be revoked and no new certificates may be
   issued. Resolving the requirements and preferences for issuing
   .onion certificates before this date is extremely desirable, as
   it will determine if organizations such as Facebook will invest
   in the time and engineering effort needed to deploy Hidden
   Services.

   The full requirements for issuing .onion certificates are TBD,
   although recognition of .onion by IANA as a reserved domain is
   likely required.

   During discussions about the requirements for issuing for
   .onion one question that has arisen is what is a safe way to
   assert ownership of a .onion address and format an x509
   certificate with a .onion Subject Alternate Name (SAN).
   This document is designed to address some of those questions.

-tom

[0] https://lists.torproject.org/pipermail/tor-dev/2014-November/007786.html
Filename: XXX-recommendations-for-onion-certificates.txt
Title: Recommendations for CA-signed .onion Certificates
Authors: Tom Ritter
Created: 23 November 2014
Target: n/a
Status: Draft

1. Intro and motivation

   Several organizations, such as Facebook, (collectively, 
   'applicants') have expressed a desire for an end-entity 
   SSL certificate valid for their .onion Hidden Service 
   address signed by a Certificate Authority (CA) present in 
   browser trust stores.

   Applicants want a CA-signed .onion address for several reasons,
   including:
- Architectural reasons, where the existing web server
  expects and requires SSL 
- Removing the existing 'Invalid Certificate' warning when 
  accessing a .onion URL over https
- Public attribution of ownership of the .onion address

   The existing Facebook .onion URL was issued by Digicert under
   a loophole in certificate issuance requirements. The Baseline
   Requirements [0] is a document created by the Certificate
   Authority/Browser Forum that governs acceptable certificate 
   issuing policies. Adherence to its policies is required for 
   inclusion in many browsers' trust stores. .onion counts as
   an 'internal hostname' as it is not a recognized TLD by IANA.
   November 1, 2015 sets the deadline for when all internal server
   certificates must be revoked and no new certificates may be 
   issued. Resolving the requirements and preferences for issuing
   .onion certificates before this date is extremely desirable, as
   it will determine if organizations such as Facebook will invest
   in the time and engineering effort needed to deploy Hidden 
   Services.  

   The full requirements for issuing .onion certificates are TBD,
   although recognition of .onion by IANA as a reserved domain is
   likely required.

   During discussions about the requirements for issuing for 
   .onion one question that has arisen is what is a safe way to 
   assert ownership of a .onion address and format an x509 
   certificate with a .onion Subject Alternate Name (SAN).
   This document is designed to address some of those questions.

2. Ownership of a .onion

   A .onion address is an 80-bit truncation of the SHA-1 hash of 
   the Hidden Service's public identity key (which is RSA 1024).
   To assert that an entity has control of a .onion address, an 
   entity could do one of several things.
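
   (For illustration only, and not part of any requirement: the
   derivation described above amounts to the following sketch, assuming
   the PKCS#1-encoded RSA public identity key from the service
   descriptor and the Python 'cryptography' package.)

       import base64
       import hashlib

       from cryptography.hazmat.primitives.serialization import (
           Encoding, PublicFormat, load_pem_public_key,
       )

       def onion_address(pem_public_key_bytes):
           """base32 of the first 80 bits of the SHA-1 of the DER-encoded
           (PKCS#1) RSA public identity key."""
           key = load_pem_public_key(pem_public_key_bytes)
           der = key.public_bytes(Encoding.DER, PublicFormat.PKCS1)
           truncated = hashlib.sha1(der).digest()[:10]  # 80 bits
           return base64.b32encode(truncated).decode().lower() + ".onion"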

2.1 Well Known URI 

   RFC5785 [1] specifies a path prefix for well known locations. 
   A CA could require an applicant to post a specific value at
   a well-known URI.  Because of the nature of .onion addresses,
   this ensures that successful completion of this challenge is
   limited to:

   i.   The legitimate owner of the .onion domain

Re: [tor-dev] Of CA-signed certs and .onion URIs

2014-11-18 Thread Tom Ritter
On 18 November 2014 21:53, grarpamp grarp...@gmail.com wrote:
 On Tue, Nov 18, 2014 at 12:55 PM, George Kadianakis
 desnac...@riseup.net wrote:
 plans for any Tor modifications we want to do (for example, trusting
 self-signed certs signed by the HS identity key seem like a generally
 good idea).

 If the HS pubkey and the onion CN were both in the cert, and signed
 over by that same key all corresponding to the url the TBB is currently
 attempting to validate, that would seem fine to me. No interaction with
 the controller (which may not even be accessible [1]) needed to get the
 HS descriptor (pubkey). Security is limited to 80-bits, or the future wider
 proposal. It's also a TBB specific extension. All other browsers pointed
 at socks5 somewhere will still rightly fail, unless adopted upstream
 (which MSIE won't do) or via standards. Note that this is not 'turning off
 the warnings for all .onion', it's recognizing that attestation of the HS key
 is sufficient to show ownership of that service. Whereas under various
 attacks a traditional selfsigned cert is not.

If I've put a .onion url into the address bar in Tor Browser, and it
connects me - absent a bug in tor, 80-bit collision, or 1024-bit
factoring I know I'm talking to an endpoint on the other side that is
authoritative for the public key corresponding to the onion address.
At that point, they can tell me whatever they want, and I know I'm
still talking to the correct endpoint - like you said, the .onion
resolving and succeeding in connecting attested to the service.

So I'm not sure I understand the attacks you're talking about.  It's
true that if, on the relay hosting the HS, you forwarded it to another
machine and that connection was attacked (between your webserver and
your HS relay) - the connection would be insecure.  But I consider
that to be outside Tor. You had a responsibility to make your
application secure and you didn't, same as if you had SQL injection.

You mention All other browsers pointed at socks5 somewhere will still
rightly fail - that's still the case.  I'm not talking about putting
this .onion SSL bypass stuff into little-t tor, I'm talking about
making it a Tor Browser Extension - if that crossed our wires.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Of CA-signed certs and .onion URIs

2014-11-14 Thread Tom Ritter
There's been a spirited debate on irc, so I thought I would try and
capture my thoughts in long form. I think it's important to look at
the long-term goals rather than how to get there, so that's where I'm
going to start, and then at each item maybe talk a little bit about
how to get there.  So I think the Tor Project and Tor Browser should:

a) Eliminate self-signed certificate errors when browsing https:// on
an onion site
b) Consider how Mixed Content should interact with .onion browsing
c) Get .onion IANA reserved
d) Address the problems that Facebook is/was concerned about when
deploying a .onion
e) Consider how EV treatment could be used to improve poor .onion readability

(If you're not familiar with DV [Domain Validated] and EV [Extended
Validation] certificates and their UI differences, you should take a
peek. For example [0]. There are other subtleties and requirements on
EV certs like OCSP checking that removes the indicator, and the
forthcoming CT effort in Chrome, but that's mostly orthogonal.)


a) Self Signed Errors on .onion

A .onion specifies the key needed. As far as little-t tor is
concerned, it got you to the correct endpoint safely, so whatever SSL
certificate is presented could be considered valid.

However, if the Hidden Service is on box A and the webserver on box B
- you'd need to do some out-of-application tricks (like stunnel) to
prevent a MITM from attacking that connection.  So as Roger suggested,
perhaps requiring the SSL certificate to be signed by the .onion key
would be a reasonable choice.   But if you make that requirement, it
also implies that HTTP .onions are less secure than HTTPS .onions.
Which may or not be the case - you don't know.

I'm not religious about anything other than getting rid of the error:
I don't like that users are trained to click through security errors.

This is a weakly held opinion right now - but I think it's fair to
give DV treatment to http://.onion because it is, from little-t tor's
point of view, secure.  Following that conclusion, it is therefore
fair to accept self-signed certificates and _not_ require that a
certificate for an https://.onion be signed by the .onion key.
(Because otherwise, we're saying that SSL on .onion requires more
hoops to achieve security than HTTP on .onion, which isn't the case.)


b) Mixed Content on .onion

This is a can of worms I'm not going to open in this mail. But it's
there, and I think it's worth thinking about whether a .onion
requesting resources from http://.com or https://.com is acceptable.


c) Get .onion IANA reserved

I think this is fairly apparent in itself, and is in the works [1].
Not sure its status but I would be happy to lend time in whatever IETF
group/work is needed if it will help.


d) Address the problems that Facebook is/was concerned about when
deploying a .onion

There are reasons, technical and political, why Facebook went and got
an HTTPS cert for their .onion.  I've copied Alec, so hopefully he'll
agree, refute, or add.  But from my perspective, if I were Facebook or
another large company like that:

i) I don't want to train my users to click through HTTPS warnings.
(Conversely, I like training my users to type https://mysite.com)
ii) I don't want to have to do the development and QA work to cut my
site over to be sometimes-HTTP if it's normally always HTTPS
iii) It would be convenient if I didn't have to do stunnel tricks to
encrypt the connection between my Hidden Service server and (e.g.)
load balancer, which is on another box
iv) I'd really like to get a green box showing my org name, and it's
even better that it'd be very difficult for a phisher to get that

(iii) can conflict with (a) above, of course. Because I came to the
conclusion of allowing self-signed certificates, a MITM could attack
Facebook between the HS server and the load balancer. I'm not sure
there is an elegant solution there. One would probably have to tunnel
the traffic over a mutually authenticated stunnel connection to prevent
a MITM. But frankly, if we assume users are used to clicking through
self-signed certs and we want to start the process of training them
_not_ to, Facebook would have to do this now _anyway_. So... =/  I
guess documenting the crap out of this concern and providing examples
may be the best solution based off my mindset right now.

It's awesome that Facebook set up a Hidden Service.  I'd love to get a
lot more large orgs doing that.  We should reach out and figure out
what the blockers are, what's painful about it, and what we can do to
help.  I would love doing that, it would be awesome.  (And I'm not
afraid to NDA myself up if necessary, seeing as I'm under NDA with
half of the Bay Area anyway.)


e) Consider how EV treatment could be used to improve poor .onion readability

This is the trickiest one, and it overlaps the most with the question
of "Should we encourage CAs to issue certificates?"

EV 

Re: [tor-dev] Running a Separate Tor Network

2014-11-09 Thread Tom Ritter
On 22 October 2014 05:48, Roger Dingledine a...@mit.edu wrote:
 What I had to do was make one of my Directory Authorities an exit -
 this let the other nodes start building circuits through the
 authorities and upload descriptors.

 This part seems surprising to me -- directory authorities always publish
 their dirport whether they've found it reachable or not, and relays
 publish their descriptors directly to the dirport of each directory
 authority (not through the Tor network).

 So maybe there's a bug that you aren't describing, or maybe you are
 misunderstanding what you saw?

 See also https://trac.torproject.org/projects/tor/ticket/11973

 Another problem I ran into was that nodes couldn't conduct
 reachability tests when I had exits that were only using the Reduced
 Exit Policy - because it doesn't list the ORPort/DirPort!  (I was
 using nonstandard ports actually, but indeed the reduced exit policy
 does not include 9001 or 9030.)  Looking at the current consensus,
 there are 40 exits that exit to all ports, and 400-something exits
 that use the ReducedExitPolicy.  It seems like 9001 and 9030 should
 probably be added to that for reachability tests?

 The reachability tests for the ORPort involve extending the circuit to
 the ORPort -- which doesn't use an exit stream. So your relays should
 have been able to find themselves reachable, and published a descriptor,
 even with no exit relays in the network.

I think I traced down the source of the behavior I saw.  In brief, I
don't think reachability tests happen when there are no Exit nodes
because of a quirk in the bootstrapping process, where we never think
we have a minimum of directory information:

Nov 09 22:10:26.000 [notice] I learned some more directory
information, but not enough to build a circuit: We need more
descriptors: we have 5/5, and can only build 0% of likely paths. (We
have 100% of guards bw, 100% of midpoint bw, and 0% of exit bw.)

In long form: https://trac.torproject.org/projects/tor/ticket/13718




 Continuing in this thread, another problem I hit was that (I believe)
 nodes expect the 'Stable' flag when conducting certain reachability
 tests.  I'm not 100% certain - it may not prevent the relay from
 uploading a descriptor, but it seems like if no acceptable exit node
 is Stable - some reachability tests will be stuck.  I see these sorts
 of errors when there is no stable Exit node (the node generating the
 errors is in fact a Stable Exit though, so it clearly uploaded its
 descriptor and keeps running):

 In consider_testing_reachability() we call

 circuit_launch_by_extend_info(CIRCUIT_PURPOSE_TESTING, ei,
 CIRCLAUNCH_NEED_CAPACITY|CIRCLAUNCH_IS_INTERNAL);

 So the ORPort reachability test doesn't require the Stable flag.

You're right, reachability doesn't depend on Stable, sorry.



 I then added auth5 to a second DirAuth (auth2) as a trusted DirAuth.
 This results in a consensus for auth1, auth2, and auth5 - but auth3
 and auth4 did not sign it or produce a consensus.  Because the
 consensus was only signed by 2 of the 4 Auths (e.g., not a majority) -
 it was rejected by the relays (which did not list auth5).

 Right -- when you change the set of directory authorities, you need to
 get a sufficient clump of them to change all at once. This coordination
 has been a real hassle as we grow the number of directory authorities,
 and it's one of the main reasons we don't have more currently.

I'm going to try thinking more about this problem.
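
For anyone following along, the coordination is literally every
operator editing the same set of torrc lines at roughly the same time -
something like this (a sketch; nicknames, addresses, and fingerprints
are placeholders):

    DirAuthority auth1 orport=5001 v3ident=<auth1 v3 fingerprint> 10.0.0.1:7001 <auth1 identity fingerprint>
    DirAuthority auth2 orport=5002 v3ident=<auth2 v3 fingerprint> 10.0.0.2:7001 <auth2 identity fingerprint>
    DirAuthority auth5 orport=5005 v3ident=<auth5 v3 fingerprint> 10.0.0.5:7001 <auth5 identity fingerprint>

If a majority of the authorities aren't running with the same list, you
get exactly the 2-of-4 signature problem described above.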



 This was fixed in git commit c03cfc05, and I think the fix went into
 Tor 0.2.4.13-alpha. What ancient version is your man page from?

/looks sheepish
I was using http://linux.die.net/man/1/tor because it's very quick to
pull up :-p


  And how there _is no_
 V3AuthInitialVotingInterval?  And that you can't modify these
 parameters without turning on TestingTorParameters (despite the fact
 that they will be used without TestingTorNetwork?)  And also,
 unrelated to the naming, these parameters are a fallback case for when
 we don't have a consensus, but if they're not kept in sync with
 V3AuthVotingInterval and their kin - the DirAuth can wind up
 completely out of sync and be unable to recover (except by luck).

 Yeah, don't mess with them unless you know what you're doing.

 As for the confusing names, you're totally right:
 https://trac.torproject.org/projects/tor/ticket/11967

Ahha.


  - The Directory Authority information is a bit out of date.
 Specifically, I was most confused by V1 vs V2 vs V3 Directories.  I am
 not sure if the actual network's DirAuths set V1AuthoritativeDirectory
 or V2AuthoritativeDirectory - but I eventually convinced myself that
 only V3AuthoritativeDirectory was needed.

 Correct. Can you submit a ticket to fix this, wherever you found it?
 Assuming it wasn't from your ancient man page that is? :)

It was.



  - The networkstatus-bridges file is not included in the tor man page

 Yep. Please file a ticket.


[tor-dev] Running a Separate Tor Network

2014-10-15 Thread Tom Ritter
Hi all,

Not content to let you have all the fun, I decided to run my own Tor network!

Kidding ;)  But the Directory Authorities, the crappy experiment
leading up to Black Hat, and the promise that one can recreate the Tor
Network in the event of some catastrophe interest me enough that I
decided to investigate it.  I'm aware of Chutney and Shadow, but I
wanted it to feel as authentic as possible, so I forwent those, and
just ran full-featured independent tor daemons.  I explicitly wanted
to avoid setting TestingTorNetwork.  I did have to edit a few other
parameters, but very few. [0]

I plan on doing a blog post, giving a HOWTO, but I thought I'd write
about my experience so far.  I've found a number of interesting issues
that arise in the bootstrapping of a non-TestingTorNetwork, mostly
around reachability testing.

-

One of the first things I ran into was a problem where I could not get
any routers to upload descriptors.  Before uploading a descriptor, an
OR checks its own reachability by building a circuit - a check that is
bypassed with AssumeReachable or TestingTorNetwork.  This works fine
for Chutney and Shadow, as they reach into the OR and set
AssumeReachable.  But if the Tor Network were to be rebooted... most
nodes out there would _not_ have AssumeReachable, and they would not
be able to perform self-testing with a consensus consisting of just
Directory Authorities.  I think nodes left running would be okay, but
nodes restarted would be stuck in a startup loop.  I imagine what
would actually happen is Noisebridge and TorServers and a few other
close friends would set the flag, they would get into the consensus,
and then the rest of the network would start coming back...   (Or
possibly a few nodes could anticipate this problem ahead of time, and
set it now.)
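
For concreteness, the manual workaround per node is a one-line torrc
setting (a sketch, not a recommendation for the public network):

    # in the relay's torrc: skip the self-reachability test and
    # publish a descriptor immediately
    AssumeReachable 1

Chutney and Shadow effectively set this for you.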

What I had to do was make one of my Directory Authorities an exit -
this let the other nodes start building circuits through the
authorities and upload descriptors.  Maybe an OR should have logic
that if it has a valid consensus with no Exit nodes, it should assume
it's reachable and send a descriptor - and then let the Directory
Authorities perform reachability tests for whether or not to include
it?  From the POV of an intentional DoS - an OR doesn't have to obey
the reachability test of course, so no change there.  It could
potentially lead to an unintentional DoS where all several thousand
routers start slamming the DirAuths as soon as a usable-but-blank
consensus is found... but AFAIK routers probe for a consensus based on
semi-random timing anyway, so that may mitigate that?

-

Another problem I ran into was that nodes couldn't conduct
reachability tests when I had exits that were only using the Reduced
Exit Policy - because it doesn't list the ORPort/DirPort!  (I was
using nonstandard ports actually, but indeed the reduced exit policy
does not include 9001 or 9030.)  Looking at the current consensus,
there are 40 exits that exit to all ports, and 400-something exits
that use the ReducedExitPolicy.  It seems like 9001 and 9030 should
probably be added to that for reachability tests?
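
If the reduced policy did grow those two ports, it would just be two
more lines ahead of the usual set (a sketch of a torrc fragment):

    ExitPolicy accept *:9001   # default ORPort, for reachability tests
    ExitPolicy accept *:9030   # default DirPort
    # ...the existing ReducedExitPolicy accept lines...
    ExitPolicy reject *:*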

-

Continuing in this thread, another problem I hit was that (I believe)
nodes expect the 'Stable' flag when conducting certain reachability
tests.  I'm not 100% certain - it may not prevent the relay from
uploading a descriptor, but it seems like if no acceptable exit node
is Stable - some reachability tests will be stuck.  I see these sorts
of errors when there is no stable Exit node (the node generating the
errors is in fact a Stable Exit though, so it clearly uploaded its
descriptor and keeps running):
Oct 13 14:49:46.000 [warn] Making tunnel to dirserver failed.
Oct 13 14:49:46.000 [warn] We just marked ourself as down. Are your
external addresses reachable?
Oct 13 14:50:47.000 [notice] No Tor server allows exit to
[scrubbed]:25030. Rejecting.

Since ORPort/DirPort are not in the ReducedExitPolicy, this (may?)
restrict the number of nodes available for conducting a reachability
test.  I think the Stable flag is calculated off the average age of
the network though, so the only time this would cause a big problem is
when the network (DirAuths) has been running for a little bit and a
full exit node hasn't been added yet - the network would have to wait
longer for that full exit node to get the Stable flag.

-

Getting a BWAuth running was... nontrivial.  Some of the things I found:
 - SQLAlchemy 0.7.x is no longer supported upstream.  0.9.x does not
work, nor does 0.8.x; 0.7.10 does.
 - Several quasi-bugs with the code/documentation (the earliest three
commits here: https://github.com/tomrittervg/torflow/commits/tomedits)
 - The bandwidth scanner actively breaks in certain situations of
divide-by-zero 
(https://github.com/tomrittervg/torflow/commit/053dfc17c0411dac0f6c4e43954f90b1338a3967)
 - The scanner will be perpetually stuck if you're sitting on the same
/16 and you don't perform the equivalent of EnforceDistinctSubnets 0
[1]
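
In case it saves someone else the digging, the two fixes that got my
scanner unstuck looked like this (a sketch; exact versions and paths
may differ for you):

    # torflow's code wants the old SQLAlchemy API
    pip install SQLAlchemy==0.7.10

    # in the torrc of the tor instance the scanner drives, so it can
    # build circuits even though everything shares one /16
    EnforceDistinctSubnets 0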

Ultimately, while I 

Re: [tor-dev] Scaling tor for a global population

2014-09-28 Thread Tom Ritter
On 28 September 2014 07:00, Sebastian Hahn sebast...@torproject.org wrote:
 This analysis doesn't make much sense, I'm afraid. We use compression
 on the wire, so repeating flags as human-readable strings has a much
 lower overhead than you estimate, for example. Re-doing your estimates
 with actually compressed consensuses might make sense, but probably
 you'll see a lot less value.

All of those numbers were after compressing the consensus document
using xz, which is the best compression method I know.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Scaling tor for a global population

2014-09-27 Thread Tom Ritter
On 26 September 2014 22:28, Mike Perry mikepe...@torproject.org wrote:
 That's basically what I'm arguing: We can increase the capacity of the
 network by reducing directory waste but adding more high capacity relays
 to replace this waste, causing the overall directory to be the same
 size, but with more capacity.

I'm sure that diffs will make a huge difference, but if you're
focusing on the directory documents why not also change the consensus
and related document formats to be something more efficient than ASCII
text?  Taking the latest consensus and doing some rough estimates, I
found the following:

Original consensus, xz-ed: 407K
Change flags to uint16: ~399K
+Removing names: 363K
+Compressing IPv6 to 16 bytes + 4 bytes: 360K
+Compressing IPv4 to 4 bytes + 4 bytes + 4 bytes: 315K
+Compressing the datetime to 4 bytes: 291K
+Compressing the version string to 4 bytes: 288K
+Replacing "reject 1-65K" with a single byte: 287K
+Replacing Bandwidth=# with 4 bytes: 273K

These numbers are optimistic - you won't see quite this much gain, but
if I'm understanding you correctly that the consensus is painful, it
seems like you could save at least 50K-70K out of 400K with relatively
straightforward changes.
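
To make the first line concrete, the flags-to-uint16 change is just a
bitfield with an agreed ordering - a toy sketch (the ordering here is
made up; whatever ordering is picked just has to be shared by every
parser):

    import struct

    FLAGS = ["Authority", "BadExit", "Exit", "Fast", "Guard", "HSDir",
             "Named", "Running", "Stable", "Unnamed", "V2Dir", "Valid"]

    def pack_flags(flag_names):
        bits = 0
        for name in flag_names:
            bits |= 1 << FLAGS.index(name)
        return struct.pack("!H", bits)   # 2 bytes vs "s Fast Running Stable Valid\n"

    def unpack_flags(blob):
        (bits,) = struct.unpack("!H", blob)
        return [name for i, name in enumerate(FLAGS) if bits & (1 << i)]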

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Call for a big fast bridge (to be the meek backend)

2014-09-17 Thread Tom Ritter
On 15 September 2014 21:12, David Fifield da...@bamsoftware.com wrote:
 Since meek works differently than obfs3, for example, it doesn't help us
 to have hundreds of medium-fast bridges. We need one (or maybe two or
 three) big fat fast relays, because all the traffic that is bounced
 through App Engine or Amazon will be pointed at it.

A horrible idea Isis and I came up with was standing up two or more
tor servers with the same keys, on an anycast-ed IP address. I'm not
sure how the meek backend works, but if it's not set up already to
support round-robining, you would likely be able to trick it into
doing so by duplicating keys and using DNS round-robining or other
network-layer tricks.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Guard nodes and network down events

2014-08-14 Thread Tom Ritter
On 13 August 2014 07:47, George Kadianakis desnac...@riseup.net wrote:
 The fundamental issue here is that Tor does not have a primitive that
 detects whether the network is up or down, since any such primitive
 stands out to a network attacker [3].

I'm not certain this is true.  Windows and Mac OS detect whether or
not there is a Captive Portal/Internet connection.  While one can
argue this is bad practice, piggybacking on a detection mechanism used
by default in widely deployed OS's seems like it would not stand out.

Windows has IsInternetConnected [0] which uses NCSI[1].

I know less about Mac, but there is SCNetworkReachability [2].
Apparently the (undocumented) request that Apple uses to detect
captive portals is described at [3].

It's not very clean to emulate a request instead of using an API, if
it came down to it.  But while it may seem dangerous to emulate a
request that can change in an OS patch... the reality of it is that as
long as you pay attention to the patches, you'd be able to deploy a
fix long before a non-negligible portion of people patched anyway.
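
To illustrate what emulating the request looks like, the whole check is
tiny - a sketch (the URLs and expected bodies are the ones I believe
the OSes use today, and they can obviously change with an OS update):

    import urllib.request

    PROBES = [
        ("http://www.msftncsi.com/ncsi.txt", b"Microsoft NCSI"),         # Windows NCSI
        ("http://www.apple.com/library/test/success.html", b"Success"),  # Apple WISPR-style check
    ]

    def network_looks_up(timeout=5):
        for url, expected in PROBES:
            try:
                body = urllib.request.urlopen(url, timeout=timeout).read()
            except OSError:
                continue          # timed out / refused: try the next probe
            if expected in body:
                return True       # canonical answer: online, no captive portal
        return False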

-tom

[0] 
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366143(v=vs.85).aspx
[1] http://blog.superuser.com/2011/05/16/windows-7-network-awareness/
[2] 
https://developer.apple.com/library/mac/documentation/SystemConfiguration/Reference/SCNetworkReachabilityRef/Reference/reference.html
[3] 
http://blog.erratasec.com/2010/09/apples-secret-wispr-request.html#.U-y2KYBdWaQ
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Hidden service policies

2014-07-20 Thread Tom Ritter
One of my first concerns would be that this would build in a very easy
way for a government (probably the US government) to compel Tor to add
in a line of code that says If it's this hidden service key, block
access.

After all - it's a stretch to say You must modify your software to
support blocking things[0] but it's not so much a stretch to say You
already have the code written to block access to things, block access
to this thing.

-tom

[0] The OnStar legal case notwithstanding
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] 7 Dir Servers Dropping - Doctor Error?

2014-07-06 Thread Tom Ritter
On 6 July 2014 18:59, doctor role account
doc...@cappadocicum.torproject.org wrote:
 ERROR: Unable to retrieve the consensus from maatuska 
 (http://171.25.193.9:443/tor/status-vote/current/consensus): timed out
 ERROR: Unable to retrieve the consensus from tor26 
 (http://86.59.21.38:80/tor/status-vote/current/consensus): timed out
 ERROR: Unable to retrieve the consensus from urras 
 (http://208.83.223.34:443/tor/status-vote/current/consensus): timed out
 ERROR: Unable to retrieve the consensus from gabelmoo 
 (http://212.112.245.170:80/tor/status-vote/current/consensus): timed out
 ERROR: Unable to retrieve the consensus from moria1 
 (http://128.31.0.39:9131/tor/status-vote/current/consensus): timed out
 ERROR: Unable to retrieve the consensus from dannenberg 
 (http://193.23.244.244:80/tor/status-vote/current/consensus): timed out
 ERROR: Unable to retrieve the consensus from Faravahar 
 (http://154.35.32.5:80/tor/status-vote/current/consensus): timed out

I've been keeping an eye on consensus health; it seems to be pretty
reliable, but this is an obvious outlier.  I assume it was a doctor
error of some sort?

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Tor Geolocating exit nodes.

2014-06-18 Thread Tom Ritter
If your goal is to choose an exit specially to minimize risk of it being
run by a malicious actor, it seems choosing exits run by orgs you trust
would be better than choosing based on where someone is hosting a server.

But yes, you can choose exits by country.  I'm not saying it's a good
idea, or that hand-picking exits in any fashion is good for the
network.  (It's not.)

http://www.2byts.com/2012/03/09/how-to-configure-the-exit-country-on-tor-network/
http://tor.stackexchange.com/questions/733/can-i-exit-from-a-specific-country-or-node
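
(The torrc incantation behind those links is just country codes in
braces, e.g.:

    ExitNodes {br},{au}

which, per the above, I'm not endorsing.)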

-tom
On Jun 18, 2014 1:41 AM, JP Wulf wulf...@gmail.com wrote:


 So Griffin Boyce is canvassing for some input to improve Tor,
 specifically for journalists.
 https://twitter.com/abditum/status/479052228138119168

 1. It is known that various actors are trying to compromise Tor comms
 by establishing their own exit nodes. With enough nodes, they can
 break Tor (see slides).

 2. Idea: Is it possible to allow the end user to determine the
 geo-location (with various degrees of fine tuning, from hemisphere,
 through continental, to top country domain, to regional)?
 (I have NFI about the inner workings of TOR protocol and new work on it)

 For example. Say a journalist in Russia is using Tor, s/he declares in
 their tor client, that they only want to use exit nodes in South America
 and Australia. Thus minimising the chance the nodes are owned.

 This geolocation could perhaps be used to validate the integrity of
 the nodes (how, I don't know - maybe by establishing TOR honeypots
 that can only be compromised by traffic through a compromised (owned)
 exit node).

 Risk:
 This is a rats nest, because if implemented incorrectly it may allow
 hostile actors to direct exit nodes to those that are owned.

 Thanks for reading my fiction. Maybe it's useful in the light of what
 Griffin is asking about.


 --
 JP Wulf
 Problem Solution Engineering
 http://nomeonastiq.com/

 ___
 tor-dev mailing list
 tor-dev@lists.torproject.org
 https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] A few questions about defenses against particular attacks

2014-03-14 Thread Tom Ritter
Hi Yuhao!

Some of the things Tor does (e.g. the public list of nodes) are done
that way because the alternatives are relatively easy to attack.  For
example:

On 13 March 2014 15:08, Yuhao Dong yd2d...@uwaterloo.ca wrote:
   - No public list of all node addresses; this makes determining
   whether certain traffic is Oor traffic much harder. More at the next
   bulletpoint
...
- Blanket blacklist attacks by censors. Censors can poll the directory
and block all ordinary Tor nodes. (obfsproxy) bridges are a workaround.
   - Oor's directory maintains a *graph* of all nodes. Each node knows
   the public keys of all the other nodes, but each node only knows the
   addresses of *adjacent* nodes.


An attacker could enumerate all exit nodes by simply building lots of
circuits and connecting to a website they control, noting the origin
IPs.
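
For Tor that enumeration is a few lines against a stock client - a
sketch, assuming a local tor with SocksPort 9050 / ControlPort 9051,
the stem and requests[socks] packages, and a hypothetical
check.example.com that echoes the caller's IP:

    import time, requests
    from stem import Signal
    from stem.control import Controller

    PROXIES = {"http": "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}
    seen_exits = set()

    with Controller.from_port(port=9051) as ctl:
        ctl.authenticate()
        for _ in range(1000):
            ctl.signal(Signal.NEWNYM)      # request fresh circuits
            time.sleep(10)                 # respect the NEWNYM rate limit
            ip = requests.get("https://check.example.com/ip",
                              proxies=PROXIES, timeout=30).text.strip()
            seen_exits.add(ip)

    print(len(seen_exits), "distinct exit IPs observed")

The same loop works against any design where exits ultimately have to
touch a destination the attacker controls.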

Similarly, I'm assuming you're allowing users to run nodes, in which
case I can stand up node after node (or keep generating new node
identities) and record the addresses of the nodes I am connected to.

I'm also assuming there is some central directory in the middle that
nodes connect to and provide their identity key and address?  And then
when you start up a node, it will give you your 'neighboring' nodes?

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] HTTPS Server Impersonation

2013-09-30 Thread Tom Ritter
On 30 September 2013 07:01, Ian Goldberg i...@cs.uwaterloo.ca wrote:
 On Mon, Sep 30, 2013 at 01:03:14AM -0700, Rohit wrote:
 This should satisfy most goals.
 - A passive attacker wouldn't be able to distinguish between HTTPS-HTTPS 
 traffic and Tor-Bridge. (Both use TLS)

 This seems false to me; it's not too hard to distinguish Tor-over-TLS
 from HTTP-over-TLS, right?


Difficulty is relative. From an academic standpoint - no it's not too
difficult.  From an engineering standpoint, I think it's difficult
enough to be worth pursuing.

Brandon Wiley tested bypassing protocol assignment on a lot of
real-world DPI hardware[0]. It's extremely rare for any of them to
make a protocol assignment on anything other than the first packet of
a stream.  It's fast, it's not stateful, it takes less memory, it
works in 99.9% of cases. For the few remaining cases, it's extremely,
extremely rare (if not 'never') for a device to do statistical
analysis to classify a protocol.

So while it's _possible_ for someone to detect the difference, the
amount of engineering that's required in a deployment situation is
much greater than the amount needed to build a POC. And even if
someone does build a way to detect it, it will be a statistical
classification (probably), so by altering Tor's behavior we could
break their classifier (for example, by using AlSabah's and your
traffic-splitting approach, but applied to mimic normal browser
resource loads). And, I think, the goal isn't to achieve a 100%
bypass, but rather to raise their false positive rate high enough that
it's undeployable without extreme backlash[1].
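
For a sense of how shallow that first-packet matching is, here's a toy
version of the sort of test such a box applies (illustrative only):

    def looks_like_tls_client_hello(first_packet: bytes) -> bool:
        return (len(first_packet) >= 6
                and first_packet[0] == 0x16     # TLS record type: handshake
                and first_packet[1] == 0x03     # TLS major version
                and first_packet[5] == 0x01)    # handshake type: ClientHello

    # Anything passing this gets labeled "TLS" and is typically never
    # re-examined - which is exactly what a TLS-wrapped transport relies on.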

-tom

[0] https://github.com/blanu/Dust
[1] I'd love to get a better handle on this, but I've heard that when
China blocked Github, the outrage was enough to get it unblocked in
short enough order.
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Idea regarding active probing and follow-up of SSL connections to TOR bridges

2013-07-27 Thread Tom Ritter
On 27 July 2013 10:17, Lag Inimaineb laginimai...@gmail.com wrote:
 As for suggestions such as SWEET, FreeWave, etc. - those would require
 changes to the TOR clients (right?), which makes them probably less
 easy to use, unless they are merged into the TOR mainline. Same goes
 for ScrambleSuit, since the shared secret must somehow be delivered
 out-of-band, which is not always an easy feat to accomplish.

Those are not the biggest hurdles.  Distributing a secret along with
bridge IPs is not too difficult; BridgeDB has this capability built
in.  Likewise, changes to TBB are relatively easy compared to the
difficulty of having a major social media site install software that
splits Tor bridge traffic off from their legit HTTP traffic.  That
would require them being extremely, _extremely_ confident in the
scalability, performance, and security of said code.

That said - I've had this same idea myself.  I tend to categorize
censorship into 4 buckets:
1) Source-Based. You are not allowed online.
2) Destination-Based - you can't talk to this host, this IP, this port
3) Byte-Matching - You can't search for this term, you can't speak this protocol
4) Pattern-Based - You can't talk SSL in a manner where you're
uploading the same amount as you're downloading, or you can't use SSH
in a way that looks like you're transferring files.

We've seen large deployments of Destination-Based and Byte-Matching
censorship (augmented with follow-up scans for higher confidence).

Github was blocked in China briefly, and allegedly the Chinese people
protested and the ban was lifted.[0]  This implies, to me, that
certain sites are too politically important to be blocked.  If we
enlisted their help in this model we would have essentially
unblockable bridges.  It's a win-win: either the gov't doesn't block
the site, and people can use the bridges, OR the gov't does block the
site, pisses people off, and hopefully the crumble begins.  It's
probably not a popular opinion, but the more a government makes its
people suffer... the more likely they are to overthrow it.  (And not
having github is a lot better suffering than being thrown in the
gulag.)

-tom

[0] 
http://www.h-online.com/open/news/item/GitHub-blocked-in-China-Update-1789114.html
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Discussion on the crypto migration plan of the identity keys of Hidden Services

2013-06-07 Thread Tom Ritter
On Jun 6, 2013 9:56 AM, Matthew Finkel matthew.fin...@gmail.com wrote:
 I suppose the followup question to this is is there really a need for
 backwards compatability n years in the future? I completely understand
 the usefulness of this feature but I'm unsure if maintaining this
 ability is really necessary. The other issue arises due to the fact that
 the HSDir are not fixed, so caching this mapping will be non-trivial.

 Also, I may not be groking this idea, but which entity is signing the
 timestamp: and received back a signature of the data and a timestamp.?
 is it the HS or the HSDir? And is this signature also created using a 1024
 bit key?

The HS proves key ownership, and receives the time-stamped assertion
"Key1024 and Key2048 were proven to be owned by the same entity on
June 6, 2013".  They will provide that assertion to clients contacting
them post-Flag Day. The assertion can be signed with whatever key you
like: ECC, 2048, 4096, etc.

But who is the timestamper? I originally imagined the Directory
Authorities, but they don't want to have records of all HSes.  I wasn't
as familiar with HS workings when I wrote that.  I don't think HSDirs
are long-lived enough, or trustworthy enough, to be timestampers.

So now I'm not sure.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Building better pluggable transports (Google Summer of Code)

2013-05-28 Thread Tom Ritter
I have another idea.  (Not "another" in the sense of "do this instead",
but in the sense of "maybe do this additionally".)

Can a country block SSH?  Surely state-sponsored network operations
take place over SSH, so I suspect a country cannot block it quickly,
easily, and without internal retaliation from its legitimate users.
Bureaucracy.

What if one of the obfuscated proxy protocols was SSH?  And not "Let's
make Tor look like SSH" but "Let's just put Tor inside of SSH".  Have
obfsssh-client use libssh to connect to an obfsssh bridge.  When you
get the bridge IP, you also get an SSH private key to connect to the
SSH daemon.

On the server side, obfsssh-server makes sure SSH is listening on port
whatever, and connected to Tor's local port 5 (using the -W option).
When you log in, the client just talks Tor into the SSH stream, and on
the server it's passed right into Tor.

This also very neatly resolves the issue of "What if a censor tries
probing an obfs-bridge to see if it's a Tor bridge?"  The server is a
perfectly legitimate SSH daemon showing a perfectly legitimate
key-based login prompt. Unless the censor has the private key, they
can't log in.  The key is distributed next to the bridge IP - it's not
intended to be any more secret than the IP.
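
Concretely, on the client it's one ssh invocation either way (a sketch
- the key file, user, address, and 9001 standing in for tor's local
ORPort are all placeholders; see the two options in the notes below):

    # option 1: a plain local forward; point the tor client at 127.0.0.1:1337
    ssh -i bridge.key -N -L 1337:127.0.0.1:9001 tor@203.0.113.7

    # option 2: obfsssh owns the local socket and pipes each connection
    # through -W, which asks the remote sshd to connect to tor's ORPort
    ssh -i bridge.key -W 127.0.0.1:9001 tor@203.0.113.7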

I think the advantages of this are:
1) While it does require some development effort - it's not nearly as much
as other proposals.  Accordingly it's lightweight.  It's easy to deploy and
experiment with and just see if or how it works.
2) It allows us to test a theory: that if we can identify a particular
service or protocol that a censored country's government relies on, we can
disguise Tor as it, and make it painful or difficult for them to block.

I brainstormed this with Daniel Kahn Gillmor, Philipp Winter, Isis, and
a few others in Hong Kong.  The notes we took are below:

Client Side
libssh connection using a private key that is distributed w/ the
bridge IP
connect to the server

obs normally listens on 1337
Two options:
1) ssh -L 1337 and obs doesn't listen on anything
2) obs listens on 1337 and takes the data and passes it to
ssh -W

ssh -W keeps the same mechanisms that obsproxy uses so it's
preferable


Server Side
tor runs on local port 5
ssh -W tells ssh on the other side to connect to the tor port
obsproxy does not touch data on the server side

obsproxy does not open a port
it sits there making sure:
 - tor is running
 - tor is configured right
 - ssh is listening on the correct port
 - ssh is configured right
- this includes checking that MaxSessions is appropriately
sane
 - users can auth to ssh using the private key that is expected


Open questions:
Should we use ssh -L or ssh -W on the client side? (Probably -W)
Is the -W option (the control messages) in the clear or in the
encrypted transport
If it's in the clear, this can be fingerprintable

Libraries
 - paramiko does SSH server and SSH client, could use it

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Building better pluggable transports (Google Summer of Code)

2013-05-28 Thread Tom Ritter
On 28 May 2013 14:51, adrelanos adrela...@riseup.net wrote:

 How good are SSH connections with hiding what's inside?

 Website fingerprinting has demonstrated that SSH connections may hide
 communication contents, but which website was visited could be guessed
 with fairly good results.

 Tor isn't a website, but if SSH leaks which website has been visited
 even when using a SSH tunnel, will it also leak the fact, that someone
 is using Tor through a SSH tunnel?


I think that if we make the adversary upgrade from probing and byte
matching (e.g. look for specific ciphersuites) to statistical protocol
modeling, especially with a small time investment on our part, we have won
a battle.  Development effort isn't free.

You probably can detect Tor traffic inside of SSH with some probability X
after some amount of traffic Y.  But what X, what Y, and how much effort on
behalf of the adversary will it take?  I don't know, but I do think we
should work to move the fight beyond something as simple as byte matching.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Trawling for Tor Hidden Services: Detection, Measurement, Deanonymization

2013-05-23 Thread Tom Ritter
RPW et al.'s paper was made public today, and it demonstrates several
practical attacks on Hidden Services.
http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf

I was wondering if there were any private trac tickets, discussions,
or development plans about this that might be also be made public.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Discussion on the crypto migration plan of the identity keys of Hidden Services

2013-05-19 Thread Tom Ritter
On 17 May 2013 09:23, George Kadianakis desnac...@riseup.net wrote:

 There are basically two ways to do this:


A third comes to mind, somewhat similar to Mike's.

If we believe that 1024 RSA is not broken *now* (or at the very least, if
it is broken it's too valuable to waste on breaking Tor's Hidden
Services...) but that it certainly will be broken in the future - then I
can't think of any mechanism that would allow a future system that keeps
1024 bit key-based addresses to be secure...

Without introducing a trusted third party, that is.  Imagine if a
Hidden Service today were to generate a new identity key - 2048 (or
4096 or whatever) - submit this new key and its current key to a
Directory Server, signed by the 1024-bit key, and receive back a
signature of the data and a timestamp.

Now, n years down the road when 1024 bit is broken... but 2048 is not - a
user enters the 1024 bit address, it goes through all the hoops and
connects to the Hidden Service where the HS provides the 2048 bit key and
the signed timestamp. The client trusts that the mapping between the
broken 1024-bit and secure 2048-bit keys is valid because it trusts
the directory authorities to only timestamp such mappings accurately,
and the timestamp is in 2012 - before the "we're saying 1024 is broken
now, don't trust timestamps after this date" flag day.
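
To make the mechanics concrete, the assertion itself is tiny - a sketch
using Ed25519 via PyNaCl purely for brevity (the actual key types,
encoding, and who holds the signing key are all open questions above):

    from datetime import datetime, timezone
    from nacl.signing import SigningKey

    directory_key = SigningKey.generate()   # stand-in for the timestamper's key

    def timestamp_mapping(old_1024_fpr, new_2048_fpr):
        stamped = "%s %s %s" % (old_1024_fpr, new_2048_fpr,
                                datetime.now(timezone.utc).date().isoformat())
        return directory_key.sign(stamped.encode())   # message + signature blob

    # The HS stores this blob and, post flag day, hands it to clients,
    # which verify the signature and check the date predates the cutoff.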

This isn't about petnames, and from an engineering perspective it's
probably much more work than any other system.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Tor Launcher settings UI feedback request

2013-05-03 Thread Tom Ritter
Sweet!  However I think this Wizard is a super-technical version of
something that should be much simpler if we intend to be targeting
non-technical users.

Feedback:
http://trial.pearlcrescent.com/tor/torlauncher/2013-05-03/SetupWizard/screen1-proxyYesNo.png

Question 1 (this is literally the first thing a user will see when
launching, right?) is a super-technical question.  How is a user
supposed to know if they're using a proxy?  If we're targeting users
who barely know what a browser is, I don't think these instructions
are good enough.  But any instructions would be wrong IMO.  Surely
there's some way to query this information from the system, no?  On
linux, looking at environment variables; on Windows there are system
functions.

http://trial.pearlcrescent.com/tor/torlauncher/2013-05-03/SetupWizard/screen3-firewallYesNo.png

What percentage of users are a) affected by this question and b)
understand it?  I suspect it's very, very low, and therefore not worth
dedicating a screen to.  Every additional screen is a monumental
amount of pain for a non-technical user to read, try to think through,
and then ultimately guess at, hoping they get something right.  If
something, anything, doesn't work - they will think back through all
the screens they guessed at, and give up, figuring it's way too hard
to figure out.

What's the behavior of firewalls failing?  Some drop packets (timeout)
and others reject, right?  If a connection on port 6-thousand-whatever
hits one of those cases, retry on 80 or 443.  Put up a cool animated
gif that makes the user think their computer is fighting hard to get
them online.  Bonus points for having it actually represent the packet
getting lost or being rejected.
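
The retry logic itself is trivial - a sketch (host, ports, and timeout
are arbitrary placeholders):

    import socket

    def first_reachable_port(host, ports=(6789, 443, 80), timeout=5):
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return port        # connected: use this one
            except OSError:
                continue               # dropped (timeout) or rejected: try the next
        return None                    # nothing worked; time for the sad gif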

http://trial.pearlcrescent.com/tor/torlauncher/2013-05-03/SetupWizard/screen5-bridges.png

I don't think this conveys sufficiently that this is *Optional* - that
(most) users will not need to fill this in.



I think we should think about the security and privacy implications of
an approach that can be referred to, technically, as "try a bunch of
shit".

1) If a user wants to use a proxy (e.g. an SSH tunnel) and we don't
abide by that, they will leak network traffic; this is bad.  It's also
probably reasonably common for technical folks.
2) If a user must use a proxy - trying a non-proxied connection
doesn't really matter, it'll just be dropped. It's very unlikely this
will cause an alert or raise suspicions, especially when the net admin
has created a chokepoint (the proxy) where she can do all the logging
and analytics she wants. (Anyone disagree?)
3) If a user must have their traffic exit on certain ports, and we try
a different port - the firewall will drop it, and again, unlikely to
cause an alert or raise suspicions.
4) If a user *wants* to have their traffic exit on a port, and we
don't abide by it, that's bad. But I don't think this is super common.
 Well, maybe it is, maybe people prefer port 443.

So I think the best course of action would be to try and handle
everything except the bridge screen automatically, with perhaps an
[Advanced] button in the bottom left that these settings are hidden
behind.  Get the proxy settings automagically, and try some port
probing.

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [OONI] Designing the OONI Backend (OONIB). RESTful API vs rsynch

2012-07-15 Thread Tom Ritter
 Contra:
 * No support for deltas (we can use rsych protocol over HTTP if we
 really need this).

It's a little hackish, but I believe there is a 'standard' way to do
this in HTTP also.  A client issues a GET (or PUT) request to a
resource, and receives an ETag that identifies this version of the
object.  The client then issues a PATCH request to update the object,
sending the ETag, and either structured XML or JSON with the fields to
replace, or binary data with a Range header indicating where in the
object to replace.

If the ETag the client sent matches the object stored on the server,
the PATCH succeeds and overwrites the data. If the ETag does not
match, the client is out of date and must issue a GET, resolve the
differences, and then retry the PATCH.
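
A sketch of that flow with requests (the URL and fields are
hypothetical; the usual place to send the ETag back is an If-Match
header):

    import requests

    url = "https://oonib.example.org/reports/abc123"

    resp = requests.get(url)
    etag = resp.headers["ETag"]

    patch = requests.patch(url,
                           headers={"If-Match": etag},
                           json={"status": "complete"})
    if patch.status_code == 412:       # Precondition Failed: our ETag is stale
        # re-GET, resolve differences, and PATCH again
        pass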

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal 203: Avoiding censorship by impersonating an HTTPS server

2012-07-11 Thread Tom Ritter
On 11 July 2012 14:43, Jens Kubieziel maill...@kubieziel.de wrote:
 * Nick Mathewson schrieb am 2012-06-26 um 00:23 Uhr:
 Side note: What to put on the webserver?

To credibly pretend not to be ourselves, we must pretend to be
something else in particular -- and something not easily identifiable
or inherently worthless.  We should not, for example, have all

   We could also present some page which looks like a valid login page or
   a fresh installation (Apache, Mediawiki or something other popular).
   Another similar idea is it to deliver some error page, like a blank
   page with a MySQL-, PHP-, Tomcat or any other error message.

Or perhaps a 401 Authorization Required message, with a randomly
generated realm/name.  I think a lot of things would break if a censor
blocked all such prompts.
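
Something like this, with a realm freshly generated per install (a
sketch of the raw response):

    HTTP/1.1 401 Authorization Required
    WWW-Authenticate: Basic realm="d41d8cd98f"
    Content-Length: 0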

-tom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev