[tor-dev] Circuit times

2017-04-03 Thread grarpamp
Is anything going to blow up if CBT_NCIRCUITS_TO_OBSERVE is set
anywhere from 1k to 1M?


Re: [tor-dev] Prop279 and DNS

2017-04-03 Thread Jesse V
On 04/03/2017 05:01 PM, Jeremy Rand wrote:
> Maybe this topic has already been brought up, but in case it hasn't,
> I'll do so.  I notice that Prop279 (onion naming API) defines its own
> API rather than using DNS.  I guess that this is because of security
> concerns about the centralization of the DNS.

Hi Jeremy,

I believe that the general idea with prop279 is simply to introduce an
API for resolving pseudo-TLDs before they are sent through the Tor
network. How that is done is entirely up to the naming system.

For example, if a user typed example.bit into a Namecoin-enabled Tor
Browser, the software could then perform your proposed DNS lookup and
rewrite the request before turning it over to the tor binary. In my
case, my OnioNS software rewrites .tor to .onion, since the tor binary
knows how to handle .onion. At the moment this is a bit hacky, because
the software has to connect to tor's control port, manually review and
process each lookup, rewrite the request, and then tell tor to attach
it to a circuit. Prop279 is designed to make this much easier and to
avoid hacky solutions.
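
For a flavor of what that control-port dance involves, here is a minimal
sketch using the stem library; the lookup table and onion address are made
up, and OnioNS's real implementation differs:

    import time

    from stem import StreamStatus
    from stem.control import Controller, EventType

    # Hypothetical resolution table; OnioNS does a real lookup here.
    LOOKUP = {'example.tor': 'abcdefghijklmnop.onion'}

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        # Take over stream attachment so targets can be rewritten first.
        controller.set_conf('__LeaveStreamsUnattached', '1')

        def handle_stream(stream):
            if stream.status != StreamStatus.NEW:
                return
            target = LOOKUP.get(stream.target_address)
            if target:
                # REDIRECTSTREAM rewrites the stream's target address.
                controller.msg('REDIRECTSTREAM %s %s' % (stream.id, target))
            # ATTACHSTREAM with circuit id 0 lets tor pick the circuit.
            controller.msg('ATTACHSTREAM %s 0' % stream.id)

        controller.add_event_listener(handle_stream, EventType.STREAM)
        time.sleep(600)  # keep handling lookups for a while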

-- 
Jesse Victors
Developer of the Onion Name System





Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-03 Thread dawuud


It's worth noting that controllers able to run SETCONF can ask the tor
process to execute arbitrary programs:

man torrc | grep exec

So if you want a controller to have fewer privileges than the tor
daemon does, you need a control port filter for SETCONF at the very
least.
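
As a concrete illustration (a sketch using the stem library; every name,
address, and path below is invented), a controller with SETCONF access
could do something like:

    from stem.control import Controller

    # With SETCONF access, a controller can point a pluggable-transport
    # line at any binary, which tor will execute once the bridge is used.
    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        controller.set_options({
            'UseBridges': '1',
            'ClientTransportPlugin': 'evil exec /tmp/attacker_payload',
            'Bridge': 'evil 192.0.2.1:443',
        })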

Without a control port filter, what is the threat model of the
ControlSocketsGroupWritable and CookieAuthFileGroupReadable options?

Maybe the torrc documentation for those options should recommend using
one?


On Mon, Apr 03, 2017 at 02:41:19PM -0400, Nick Mathewson wrote:
> Hi!
> 
> As you may know, the Tor control port assumes that if you can
> authenticate to it, you are completely trusted with respect to the Tor
> instance you have authenticated to.  But there are a few programs and
> tools that filter access to the Tor control port, in an attempt to
> provide more restricted access.
> 
> When I've been asked to think about including such a feature in Tor in
> the past, I've pointed out that while filtering commands is fairly
> easy, defining a safe subset of the Tor control protocol is not.  The
> problem is that many subsets of the control port protocol are
> sufficient for a hostile application to deanonymize users in
> surprising ways.
> 
> But I could be wrong!  Maybe there are subsets that are safer than others.
> 
> Let me try to illustrate. I'll be looking at a few filter sets for example.
> =
> Filters from https://github.com/subgraph/roflcoptor/filters :
> 
> 1. gnome-shell.json
> 
> This filter allows "SIGNAL NEWNYM", which can potentially be used to
> deanonymize a user who is on a single site for a long time by causing
> that user to rebuild new circuits with a given timing pattern.
> 
> 2. onioncircuits.json
> 
> Allows "GETINFO circuit-status" and "GETINFO stream-status", which
> expose to the application a complete list of where the user is
> visiting and how they are getting there.
> 
> 3. onionshare-gui.json
> 
> Allows "SETEVENTS HS_DESC", which is exposes to the application every
> hidden service which the user is visiting.
> 
> 4. ricochet.json
> 
> Allows "SETEVENTS HS_DESC", for which see "onionshare-gui" above.
> 
> 5. tbb.json
> 
> Allows "SETEVENTS STREAM" and "GETINFO circuit-status", for which see
> "onioncircuits" above.
> 
> =
> Filters from 
> https://git-tails.immerda.ch/tails/tree/config/chroot_local-includes/etc/tor-controlport-filter.d
> :
> 
> 1. onioncircuits.yml
> 
> See onioncircuits.json above; it allows the same GETINFO stuff.
> 
> 2. onionshare.yml
> 
> As above, appears to allow HS_DESC events.  It allows "GETINFO
> onions/current", which can expose a list of every onion service
> locally hosted, even those not launched through onionshare.
> 
> 3. tor-browser.yml
> 
> As "tbb.json" above.
> 
> 4. tor-launcher.yml
> 
> Allows setconf of bridges, which allows the app to pick a hostile
> bridge on purpose.  Similar issues with Socks*Proxy.  The app can also
> use ReachableAddresses to restrict guards.
> 
> Allows SAVECONF, which lets the application make the above changes
> permanent (for as long as the torrc file is persisted).
> =
> 
> So above, I see a few common patterns:
>   * Many restrictive filters still let the application learn enough
> about the user's behavior to deanonymize them.  If the threat model is
> intended to resist a hostile application, then that application can't
> be allowed to communicate with the outside world, even over Tor.
> 
>   * Many restrictive filters block SETCONF and SAVECONF.  These two
> changes together should be enough to make sure that a hostile
> application can only deanonymize _current_ traffic, not future Tor
> traffic. Is that the threat model?  It's coherent, at least.
> 
>   * Some applications that care about their own onion services
> inadvertently find themselves informed about everyone else's onion
> services.  I wonder if there's a way around that?
> 
>   * The NEWNYM-based side-channel above is a little scary.
> 
> 
> And where do we go forward from here?
> 
> The filters above seem to have been created by granting the
> applications only the commands that they actually need, and by
> filtering all the other commands.  But if we'd like filters that
> actually provide some security against hostile applications using the
> control port, we'll need to take a different tactic: we'll need to
> define the threat models that we're trying to work within, and see
> what we can safely expose under those models.
> 
> Here are a few _possible_ models we could think about, but I'd like to
> hear from app developers and filter authors and distributors more
> about what they think:
> 
>  A. Completely trusted controller.  (What we have now)
> 
>  B. Controller is untrusted, but is blocked from exfiltrating information.
> B.1. Controller can't connect to the network at all.
> B.2. Controller can't connect to the network except over tor.
> 
>  C. Controller is trusted wrt all current private information, but
> future private information must remain secure.

[tor-dev] Prop279 and DNS

2017-04-03 Thread Jeremy Rand

Hello!

Maybe this topic has already been brought up, but in case it hasn't,
I'll do so.  I notice that Prop279 (onion naming API) defines its own
API rather than using DNS.  I guess that this is because of security
concerns about the centralization of the DNS.

However, in case you're unaware, Namecoin is designed to interoperate
with DNS.  Let's say that, hypothetically, Tor defined a DNS-based
naming system for onion services, where "_tor.example.com" had a TXT
record that was verified with DNSSEC in order to make Tor direct
"example.com" to whatever that TXT record had.  If this were done,
Namecoin would be able to produce the necessary TXT record and DNSSEC
signatures, via the standard DNS protocol, using an authoritative
nameserver that runs on localhost.  (The DNSSEC keys used would be
unique per user, generated on installation.)  Indeed, this is how
we're planning to interoperate with non-proxy-based Internet
applications.
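
For illustration, a sketch of what client-side resolution could look like
under such a scheme, using the dnspython package (the record name and
contents are assumptions per the description above, and DNSSEC validation
is omitted):

    import dns.resolver

    # Hypothetical record:
    #   _tor.example.com. IN TXT "onion=abcdefghijklmnop.onion"
    answers = dns.resolver.query('_tor.example.com', 'TXT')
    for rdata in answers:
        txt = str(rdata).strip('"')
        if txt.startswith('onion='):
            print('example.com ->', txt[len('onion='):])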

My guess is that it would be a lot less work on Namecoin's end if such
a system were used with Tor rather than a separate naming API.  It's
unclear to me how this would affect other naming systems such as GNS
(does GNS interoperate with clients that use DNS?), and it's also
unclear to me whether this would produce extra work for the Tor
developers (maybe DNS adds extra attack surface that would need to be
mitigated somehow, or maybe there would be complexity in implementing
stream isolation?).

Anyway, just figured I'd bring up the topic so that everyone's on the
same page regarding figuring out whether it's a good idea.

Cheers,
-- 
-Jeremy Rand
Lead Application Engineer at Namecoin
Mobile email: jeremyrandmob...@airmail.cc
Mobile PGP: 2158 0643 C13B B40F B0FD 5854 B007 A32D AB44 3D9C
Send non-security-critical things to my Mobile with PGP.
Please don't send me unencrypted messages.
My business email jer...@veclabs.net is having technical issues at the
moment.


Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-03 Thread Yawning Angel
For what it's worth, since there's a filter that's shipped and
nominally supported "officially"...

On Mon, 3 Apr 2017 14:41:19 -0400
Nick Mathewson  wrote:
> But I could be wrong!  Maybe there are subsets that are safer than
> others.

https://gitweb.torproject.org/tor-browser/sandboxed-tor-browser.git/tree/src/cmd/sandboxed-tor-browser/internal/tor

The threat model I used when writing it was, "firefox is probably owned
by the CIA/NSA/FBI/FSB/DGSE/AVID/GCHQ/BND/Illuminati/Reptilians, the
filter itself is trusted".  There's a feature vs anonymity tradeoff,
so it's up to the user to enable the circuit display if they want
firefox to have visibility into certain things.

Allowed (Passed through to the tor daemon):

 * `SIGNAL NEWNYM`.  If neither `addressmap_clear_transient();` nor
   `rend_client_purge_state();` is important, the filter could disallow
   the call entirely, because it already rewrites the SOCKS isolation
   for all connections to the SOCKSPort.

   At one point this was entirely synthetic and not propagated.  It's
   only a huge problem if people are not using the containerized tor
   instance.

   It's worth noting that even if I change the behavior to just change
   the SOCKS auth, a misbehaving firefox can still force new circuits
   for itself.

   The sandbox code could pop up a modal dialog box asking if the user
   really wants to "New Identity" or "New Tor Circuit for this Site",
   so that "scary" behavior requires manual user intervention (since
   torbutton's confirmation is probably subverted and not to be
   trusted).

 * (Optional) `GETCONF BRIDGE`.  The Tor Browser circuit display uses
   this to filter out Bridges from the display.  Since the circuit
   display is optional, this only happens if the user explicitly
   decides that they want the circuit display.

 * (Optional) `GETINFO ns/id/`.  Required for the circuit display.
   Mostly harmless.

 * (Optional) `GETINFO ip-to-country/`.  Required for the circuit
   display.  Harmless.  Could be handled by the filter.

Synthetic (Responses generated by the filter):

 * `PROTOCOLINFO`.  Not used by Tor Browser, even though it should be.
   Everything except the tor version is synthetic.

 * `AUTHENTICATE`.  Just returns success since the filtered control
   port does not require authentication.

 * `AUTHCHALLENGE`.  Just returns an error.  See `AUTHENTICATE`.

 * `QUIT`.  Only prior to the `AUTHENTICATE` call.  Not actually used
   by Tor Browser ever.

 * `GETINFO net/listeners/socks`.  torbutton freaks out without this.
   The response is synthetically generated to match what torbutton expects.

 * (Optional) `SETEVENTS STREAM`.  Required for the circuit display.
   Events are synthetically generated to only include streams that
   firefox created.

 * (Optional) `GETINFO circuit-status`.  Required for the circuit
   display.  Responses are synthetically generated to only include
   circuits that firefox created.

Denied:

 * Everything else.
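
A toy sketch (Python, not the actual Go implementation linked above) of
the allow / synthesize / deny pattern this list describes; the reply
strings are illustrative:

    ALLOWED = {'SIGNAL NEWNYM', 'GETCONF BRIDGE'}      # forwarded to tor
    SYNTHETIC = {
        'AUTHENTICATE': '250 OK',
        'GETINFO net/listeners/socks':
            '250-net/listeners/socks="127.0.0.1:9150"\r\n250 OK',
    }

    def handle(line, forward_to_tor):
        cmd = line.strip()
        if cmd in SYNTHETIC:             # answered by the filter itself
            return SYNTHETIC[cmd]
        if cmd in ALLOWED:               # passed through to the tor daemon
            return forward_to_tor(cmd)
        return '510 Command filtered'    # everything else is denied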

> So above, I see a few common patterns:
>   * Many restrictive filters still let the application learn enough
> about the user's behavior to deanonymize them.  If the threat model is
> intended to resist a hostile application, then that application can't
> be allowed to communicate with the outside world, even over Tor.

  "The only truly secure system is one that is powered off, cast in a
   block of concrete and sealed in a lead-lined room with armed guards -
   and even then I have my doubts." -- spaf

>   * The NEWNYM-based side-channel above is a little scary.

I don't think this is solvable while giving the application the ability
to re-generate circuits.  Maybe my modal doom dialog box should run
away from the user's mouse cursor, and play klaxon sounds too.

The use model I officially support is "sandboxed-tor-browser launches a
tor daemon in a separate container dedicated to firefox".  People who
do other things get what they deserve.

> And where do we go forward from here?

If it were up to me, I'd re-write the circuit display to only show the
exit(s) when applicable, since IMO firefox is not to be trusted with
the IP address of the user's Guard.

But the circuit display when running sandboxed defaults to off, so
people that enable it, presumably fully understand the implications of
doing so.

> The filters above seem to have been created by granting the
> applications only the commands that they actually need, and by
> filtering all the other commands.  But if we'd like filters that
> actually provide some security against hostile applications using the
> control port, we'll need to take a different tactic: we'll need to
> define the threat models that we're trying to work within, and see
> what we can safely expose under those models.

"Via the control port a subverted firefox can get certain information
about what firefox is doing, if the user configures it that way,
otherwise, all it can do is repeatedly NEWNYM" is what I think I ended
up with.

Though I have the 

Re: [tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-03 Thread meejah
Nick Mathewson  writes:

> But I could be wrong!  Maybe there are subsets that are safer than
> others.

So, I guess the "main" use-case for this stuff would be the current
users of control-port filters (like Subgraph and Whonix; others?).

It seems what these things *really* want is a "limited view" of the One
True Tor. So for example, you don't want to filter on the "command" or
"event" level, but a complete coherent "version" of the Tor state.

As in: see "your" STREAM events, or "your" HS_DESC events etc. Probably
the same for BW or similar events. This is really kind of the
"capability system" you don't want, though ;)

Also, I really don't know exactly what the threat-model is, but it does
seem like a good idea to limit what information a random application has
access to. Ideally, it would know precisely the things it *needs* to
know to do its job (or at least has been given explicit permission by a
user to know). That is, a user might click "yes, OnionShare may add onion
services to my Tor" but in reality you have to enable: ADD_ONION, (some)
HS_DESC events, DEL_ONION (but only ones you added), etc. If you really
wanted an "on-disk" one (i.e. via HiddenServiceDir not ADD_ONION), then
you have to allow (at least some) access to SETCONF etc.

Or, maybe you're happy to let that cool visualizer-thing have access to
"read only" events like STREAM, CIRC, BW, etc if you know it's sandboxed
to have zero network access.

> As above, appears to allow HS_DESC events.  It allows "GETINFO
> onions/current", which can expose a list of every onion service
> locally hosted, even those not launched through onionshare.

Doesn't this just show "onions that the current control connection has
added"?

>   * Some applications that care about their own onion services
> inadvertantly find themselves informed about everyone else's onion
> services.  I wonder if there's a way around that?

HS_DESC events include the onion (in args) so could in principle be
filtered by a control-filter to only include events for certain onions
(i.e. those added by "this" control connection). In practice, this is
probably exactly what the application wants anyway.
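
A rough sketch of that kind of per-connection filtering (event and reply
line formats follow control-spec; the helper names are made up):

    added_onions = set()            # onions created on this connection

    def on_reply(command, reply_lines):
        # ADD_ONION replies include a line like "250-ServiceID=<address>".
        if command.startswith('ADD_ONION'):
            for line in reply_lines:
                if line.startswith('250-ServiceID='):
                    added_onions.add(line.split('=', 1)[1])

    def should_forward(event_line):
        # HS_DESC events carry the onion address among their arguments;
        # only forward events for onions this connection created.
        if event_line.startswith('650 HS_DESC'):
            return any(onion in event_line for onion in added_onions)
        return True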

>  E.  Your thoughts here?

Maybe this is a chance to play with a completely different, but ideally
much better "control protocol for Tor"? The general idea would be that
you have some "trusted" software (i.e. like existing control-port
filters) that on the one side connects to the existing control-port of
Tor (and is thus "completely trusted") but then exposes "the Next Great
Control Protocol" to clients.

Nevertheless, there's still the question of what information to expose
(and how) -- i.e. the threat model, and use-cases.

Of course, the same idea as above could be used except it speaks "Tor
Control Protocol" out both sides -- that is, 'just' a slightly fancier
filter.

> signing-off-before-this-turns-into-a-capabilities-based-system,

Aww, that's what I want ;)

-- 
meejah


Re: [tor-dev] [prop269] [prop270] Ideas from Tor Meeting Discussion on Post-Quantum Crypto

2017-04-03 Thread isis agora lovecruft
Nick Mathewson transcribed 2.9K bytes:
> On Fri, Mar 31, 2017 at 10:20 PM, isis agora lovecruft wrote:
> > Hey hey,
> >
> > In summary of the breakaway group we had last Saturday on post-quantum
> > cryptography in Tor, there were a few potentially good ideas I wrote down,
> > just in case they didn't make it into the meeting notes:
> >
> >  * A client should be able to configure "I require my entire circuit to have
> >PQ handshakes" and "I require at least one handshake in my circuits to be
> >PQ".  (Previously, we had only considered having consensus parameters, in
> >order to turn the feature on e.g. once 20% of relays supported the new
> >handshake method.)
> 
> +1 on having something like this happen in some way, -0 on having
> client configuration be the recommended way for any purpose other than
> testing (Having clients behave differently is best avoided.)
> 
> Our usual approach for this kind of thing is a consensus parameter that
> can be overridden with a local option.

So it sounds like we want one consensus parameter that is a preference-ordered
list of handshake types to use, e.g. "RecommendedHandshakes 3 2", to turn
on/off usage of a particular handshake.  And we also want a consensus
parameter particular to PQ handshakes, something like "PQHandshakesPerCircuit
{none,one,all}" which only has an effect if "RecommendedHandshakes" includes a
PQ one.

Does that sound like it would give the desired configurability?
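
For concreteness, a sketch of how a client might interpret these (entirely
hypothetical) parameters when picking a handshake for each hop:

    PQ_TYPES = {3}   # suppose, hypothetically, that type 3 is the PQ one

    def pick_handshakes(recommended, pq_per_circuit, hops):
        # recommended: preference-ordered type IDs, e.g. [3, 2]
        # pq_per_circuit: 'none', 'one' or 'all'
        # hops: one set of supported handshake types per relay
        chosen = []
        for supported in hops:
            for t in recommended:
                if t in supported:
                    chosen.append(t)
                    break
            else:
                raise ValueError('no recommended handshake in common')
        n_pq = sum(1 for t in chosen if t in PQ_TYPES)
        if pq_per_circuit == 'all' and n_pq < len(chosen):
            raise ValueError('every hop must use a PQ handshake')
        if pq_per_circuit == 'one' and n_pq == 0:
            raise ValueError('at least one hop must use a PQ handshake')
        return chosen

For example, pick_handshakes([3, 2], 'one', [{2, 3}, {2}, {2, 3}]) yields
[3, 2, 3], which satisfies the 'one' policy.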

> >  * Using stateful hash-based signatures to sign descriptors and/or consensus
> >documents, and (later) if state has been lost or compromised, then request
> >the last such document submitted to regain state (probably skipping over
> >all the leaves of the last used node in the tree, or the equivalent, to be
> >safe).  (This requires more concrete design analysis, including the effects
> >of the large size of hash-based signatures on the directory bandwidth
> >usage, probably in a proposal or longer write up, should someone awesome
> >decide to research this idea further. :)
> 
> Interesting!  I'd hope we do this as a separate proposal.

Yes, as I recall (avoiding naming names so as not to volunteer anyone) others
were interested in looking into this.  A good start would be to come up with
some napkin numbers on the impacts of signature and key sizes.

For anyone interested in exploring this idea, good starting resources for
hash-based signatures:

 * For a light introduction to hash-based signatures, Adam Langley has a good
   blog post. [0]

 * For stateful: "XMSS-T: Mitigating Multi-target Attacks in
   Hash-based Signatures" (2016), by Hülsing, Rijneveld, and Song. [1]

 * For stateless: "SPHINCS: practical stateless hash-based signatures" (2014)
   by Bernstein, Hopwood, Hülsing, Lange, Niederhagen, Papachristodoulou,
   Schwabe, and Zooko. [2]

 * For background/history: Andy Hülsing keeps up-to-date lists of papers
   and recommendations. [3]

> Also my hope is that in our timeline, we prioritize PQ encryption over
> authentication, since PQ encryption provides us forward secrecy
> against future quantum computers, whereas PQ authentication is only
> useful once a sufficient quantum computer exists.
>
> (That's no reason not to think about PQ authentication, but with any
> luck, we can wait a few years for the PQ crypto world to invent some
> even better algorithms.)

Yes, I agree.  (Personally, I'm not inclined to work on this, at least not
any time in the next few years.)

If other people want to do it as a fun research project, though, I think
that's fine, since it wouldn't hurt to have a decent proposal on the table
if/when the "quantum computers we care about are for real" day comes.

Also, for what it's worth, I know Andy is always looking for specific
applications/design constraints for hash-based sigs, since constructions can
often be hand-tailored/optimised.  And, obviously, from an academic-incentives
perspective, this is both a fun problem to solve and a paper.

[0]: https://www.imperialviolet.org/2013/07/18/hashsig.html
[1]: 
https://github.com/isislovecruft/library--/blob/master/cryptography%20%26%20mathematics/post-quantum%20cryptography/XMSS-T:%20Mitigating%20Multi-target%20Attacks%20in%20Hash-based%20Signatures%20(2016)%20-%20H%C3%BClsing%2C%20Rijneveld%2C%20Song.pdf
[2]: 
https://github.com/isislovecruft/library--/blob/master/cryptography%20%26%20mathematics/post-quantum%20cryptography/SPHINCS:%20practical%20stateless%20hash-based%20signatures%20(2014)%20-%20Bernstein%2C%20Hopwood%2C%20Lange%2C%20Wilcox-OHearn%2C%20et.%20al.pdf
[3]: https://huelsing.wordpress.com/hash-based-signature-schemes/literature/

Best,
-- 
 ♥Ⓐ isis agora lovecruft
_
OpenPGP: 4096R/0A6A58A14B5946ABDE18E207A3ADB67A2CDB8B35
Current Keys: https://fyb.patternsinthevoid.net/isis.txt



Re: [tor-dev] Proposition: Applying an AONT to Prop224 addresses?

2017-04-03 Thread Taylor R Campbell
> Date: Sun, 26 Mar 2017 14:24:41 +0200
> From: Alec Muffett 
> 
> This is a point of significant concern because of issues like phishing and
> passing-off - by analogy: t0rpr0ject.0rg versus torproject.org  - and other
> games that can be played with a prop224 address now, or in future, to game
> user experience.
> [...]
> The result would be onion addresses which are less "tamperable" / more
> deterministic, such that closer to one-and-only-one published onion address will
> correspond to an onion endpoint.
> 
> What does the panel think?

What is the threat model an AONT defends against here, and what
security properties do we aim to provide against that threat?

Here are a few candidates.  Suppose I own 0123456789deadbeef2.onion,
where 2 is the onion version number.

T1. Adversary does not know 0123456789deadbeef2.onion but controls all
onion service directories.
(SP1) Adversary can't discover 0123456789deadbeef2.onion or thereby
  distinguish descriptors for 0123456789deadbeef2.onion from other
  descriptors simply by controlling what is in the directories.
  -> With or without AONT, since the onion service descriptors are
 encrypted, the adversary can't learn their content anyway.

T2. Adversary knows 0123456789deadbeef2.onion and controls all Tor
nodes except for the onion service server and client.
(SP2) Adversary cannot impersonate 0123456789deadbeef2.onion.
  -> With or without AONT, adversary can't make onion descriptor
 signatures that are verified by the 0123456789deadbeef2.onion
 key unless they have broken Ed25519.
(SP3) Adversary cannot impersonate 0123456789deadbeefN.onion for any N
  *other* than 2.
  -> With or without AONT, if the signature on the onion
 descriptor always covers the complete .onion address,
 including the version number, the adversary can't do this
 without also being able to forge signatures for
 0123456789deadbeef2.onion anyway and thus break Ed25519.
(SP4) Adversary cannot DoS 0123456789deadbeef2.onion.
  -> With or without AONT, if adversary knows legitimate .onion
 address key, they can already remove any onion descriptors
 with signatures verified by the .onion address key, even if
 the signatures are decrypted.  So we can't provide this
 security property anyway as long as the adversary knows the
 legitimate .onion address.

T3. Adversary
(a) knows 0123456789deadbeef2.onion,
(b) can spend compute to find a private key whose public key has
some chosen bits, and
(c) can submit descriptors to onion directories.
(SP5) Adversary cannot match all except replacement of l by 1, o by 0, etc.
  -> With or without AONT, this confusion is already excluded by
 base32 encoding.
(SP6) Adversary cannot match all except long enough suffix.
  -> Finding priv to fix prefix of Ed25519_priv2pub(priv) || cksum
 is almost surely just as hard as finding priv to fix prefix
 of AONT(Ed25519_priv2pub(priv) || cksum || version) or any
 other arrangement of cksum and version.

 (This assumes the AONT has low AT cost to evaluate -- but if
 you choose an AONT with high AT cost, that will severely
 penalize legitimate users of onion services, and also limit
 vanity onions to major corporations like Facebook and
 Google.)

So what security properties does an AONT give against what threat
models?  I'm probably missing something obvious here, but I expect it
will be helpful to articulate exactly what function it serves, for
future readers.
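
For concreteness, a sketch of the draft prop224-style address encoding
under discussion (field layout and checksum construction per the draft
proposal; hash choice and version byte were still in flux at the time):

    import base64, hashlib

    def onion_address(pubkey, version):
        # base32(PUBKEY || CHECKSUM || VERSION), where
        # CHECKSUM = H(".onion checksum" || PUBKEY || VERSION)[:2]
        v = bytes([version])
        checksum = hashlib.sha3_256(b'.onion checksum' + pubkey + v).digest()[:2]
        return base64.b32encode(pubkey + checksum + v).decode().lower() + '.onion'

With a 32-byte Ed25519 public key this yields a 56-character label, and
changing the version byte perturbs only the right-hand end of the address,
which is the property the AONT discussion above is trying to remove.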


[tor-dev] Control-port filtering: can it have a reasonable threat model?

2017-04-03 Thread Nick Mathewson
Hi!

As you may know, the Tor control port assumes that if you can
authenticate to it, you are completely trusted with respect to the Tor
instance you have authenticated to.  But there are a few programs and
tools that filter access to the Tor control port, in an attempt to
provide more restricted access.

When I've been asked to think about including such a feature in Tor in
the past, I've pointed out that while filtering commands is fairly
easy, defining a safe subset of the Tor control protocol is not.  The
problem is that many subsets of the control port protocol are
sufficient for a hostile application to deanonymize users in
surprising ways.

But I could be wrong!  Maybe there are subsets that are safer than others.

Let me try to illustrate. I'll be looking at a few filter sets for example.
=
Filters from https://github.com/subgraph/roflcoptor/filters :

1. gnome-shell.json

This filter allows "SIGNAL NEWNYM", which can potentially be used to
deanonymize a user who is on a single site for a long time by causing
that user to rebuild new circuits with a given timing pattern.

2. onioncircuits.json

Allows "GETINFO circuit-status" and "GETINFO stream-status", which
expose to the application a complete list of where the user is
visiting and how they are getting there.

3. onionshare-gui.json

Allows "SETEVENTS HS_DESC", which is exposes to the application every
hidden service which the user is visiting.

4. ricochet.json

Allows "SETEVENTS HS_DESC", for which see "onionshare-gui" above.

5. tbb.json

Allows "SETEVENTS STREAM" and "GETINFO circuit-status", for which see
"onioncircuits" above.

=
Filters from 
https://git-tails.immerda.ch/tails/tree/config/chroot_local-includes/etc/tor-controlport-filter.d
:

1. onioncircuits.yml

See onioncircuits.json above; it allows the same GETINFO stuff.

2. onionshare.yml

As above, appears to allow HS_DESC events.  It allows "GETINFO
onions/current", which can expose a list of every onion service
locally hosted, even those not launched through onionshare.

3. tor-browser.yml

As "tbb.json" above.

4. tor-launcher.yml

Allows setconf of bridges, which allows the app to pick a hostile
bridge on purpose.  Similar issues with Socks*Proxy.  The app can also
use ReachableAddresses to restrict guards.

Allows SAVECONF, which lets the application make the above changes
permanent (for as long as the torrc file is persisted).
=

So above, I see a few common patterns:
  * Many restrictive filters still let the application learn enough
about the user's behavior to deanonymize them.  If the threat model is
intended to resist a hostile application, then that application can't
be allowed to communicate with the outside world, even over Tor.

  * Many restrictive filters block SETCONF and SAVECONF.  These two
changes together should be enough to make sure that a hostile
application can only deanonymize _current_ traffic, not future Tor
traffic. Is that the threat model?  It's coherent, at least.

  * Some applications that care about their own onion services
inadvertently find themselves informed about everyone else's onion
services.  I wonder if there's a way around that?

  * The NEWNYM-based side-channel above is a little scary.
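
To make that side-channel concrete: a sketch of a hostile controller
watermarking a user's circuit usage via NEWNYM timing (stem-based; the
pattern is invented, and tor's NEWNYM rate-limiting would force longer
gaps in practice):

    import time

    from stem import Signal
    from stem.control import Controller

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        # Encode a recognizable "signature" into circuit-rebuild timing
        # that a network observer can also watch for.
        for gap in (12, 31, 12, 54):
            controller.signal(Signal.NEWNYM)
            time.sleep(gap)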


And where do we go forward from here?

The filters above seem to have been created by granting the
applications only the commands that they actually need, and by
filtering all the other commands.  But if we'd like filters that
actually provide some security against hostile applications using the
control port, we'll need to take a different tactic: we'll need to
define the threat models that we're trying to work within, and see
what we can safely expose under those models.

Here are a few _possible_ models we could think about, but I'd like to
hear from app developers and filter authors and distributors more
about what they think:

 A. Completely trusted controller.  (What we have now)

 B. Controller is untrusted, but is blocked from exfiltrating information.
B.1. Controller can't connect to the network at all.
B.2. Controller can't connect to the network except over tor.

 C. Controller is trusted wrt all current private information, but
future private information must remain secure.

 D. Controller is trusted wrt a fraction of the requests that the
clients are handling. (For example, all requests going over a single
SOCKSPort, or all ADD_ONION requests that it makes itself.)

 E.  Your thoughts here?




signing-off-before-this-turns-into-a-capabilities-based-system,
-- 
Nick


Re: [tor-dev] Proposition: Applying an AONT to Prop224 addresses?

2017-04-03 Thread Roger Dingledine
On Mon, Apr 03, 2017 at 10:48:26AM -0400, Ian Goldberg wrote:
> The other thing to remember is that didn't we already say that
> 
> facebookgbiyeqv3ebtjnlntwyvjoa2n7rvpnnaryd4a.onion
> 
> and
> 
> face-book-gbiy-eqv3-ebtj-nlnt-wyvj-oa2n-7rvp-nnar-yd4a.onion
> 
> will mean the same thing?

Did we? I admit that I haven't been paying enough attention to anything
lately, but last I checked, we thought that was a terrible idea because
people can make a bunch of different versions of the address, and use
them as tracking mechanisms for users. (For example, I put two versions
of the same address on my two different pages, and now when somebody goes
to that onion address, I can distinguish which page they came from. In
the extreme versions of this idea, I give a unique version of my address
to the target, and then I can spot him when he uses it.)

Ultimately the problem is that the browser is too good at giving away
the hostname that it thinks it's going to -- in various headers, in
cross-site isolation, etc etc.

So, if we have indeed decided to allow many versions of the format for
onion addresses, I hope we thought through this attack and decided it
was worth it. :)

--Roger



Re: [tor-dev] Proposition: Applying an AONT to Prop224 addresses?

2017-04-03 Thread Alec Muffett
Following the Layer-2 Addressing analogy means that Ian, here:


> If the daily descriptor uploaded to the point
> Hash(onionaddr, dailyrand) contained Hash(onionaddr, dailyrand) *in* it
> (and is signed by the master onion privkey, of course), then tor
> could/should check that it reached that location through the "right"
> onion address.
>
…has essentially just invented what Solaris (for one) calls "IP Strict
Destination Multihoming":

  http://www.informit.com/articles/article.aspx?p=101138=4

-a :-)


-- 
http://dropsafe.crypticide.com/aboutalecm


Re: [tor-dev] Proposition: Applying an AONT to Prop224 addresses?

2017-04-03 Thread Alec Muffett
On 3 April 2017 at 16:59, Ian Goldberg  wrote:

> > How about this, though: I know that Tor doesn't want to be in the business
> > of site reputation, but what if (eg) Protonmail offers an Onion "Safe
> > Browsing" extension some day, of known-bad Onions for malware reasons?
>
> That's a quite good motivating example, thanks!


#Yay; I'm also thinking of other plugins (in the cleartext world,
HTTPSEverywhere is the best example) which provide value to the user by
mechanically mutating URIs which match some canonical DNS domain name;
because Onion addresses are more like Layer-2 addresses*, development of
similar plugins benefits far more from enforced "canonicality" (sp?) than
is necessary for equally-functional DNS equivalents; there is no means to
"group" three disparate Onion addresses together just because they are all
owned by (say) Facebook, and if each address has 8 possible
representations then that's 24 rules to match against...


> > There's quite a gulf between stripping hyphens from a candidate onion
> > address and doing strcmp(), versus either drilling into the candidate
> > address to compute the alternative forms to check against the blacklist,
> > or even requiring the blacklist to be 8x larger?
>
> Yes, that's true.  I'm definitely in favour of the "multiply by L (the
> order of the group) and check that you get the identity element; error
> with 'malformed address' if you don't" to get rid of the torsion point
> problem.
>

I heard that at AMS and it sounds like a fabulous idea, although I am still too
much of an EC noob to appreciate it fully. :-)


> If the daily descriptor uploaded to the point
> Hash(onionaddr, dailyrand) contained Hash(onionaddr, dailyrand) *in* it
> (and is signed by the master onion privkey, of course), then tor
> could/should check that it reached that location through the "right"
> onion address.
>

That sounds great, and I think it sounds like an appropriate response, but again
I am a Prop224 and EC noob. :-)

I would like, for two paragraphs, to go entirely off-piste and ask a
possibly irrelevant and probably wrong-headed question:

/* BEGIN PROBABLY WRONG SECTION */
I view Onions as Layer-2 addresses, and one popular attack on Ethernet
Layer 2 is ARP-spoofing.  Imagine $STATE_ACTOR exfiltrates the private key
material from $ONIONSITE and wants to silently and partially MITM the
existing site without wholesale owning or tampering with it. Can they make
any benefit from multiple ("hardware MAC-address") keys colliding to one
address? Is there any greater benefit to $STATE_ACTOR from this than (say)
publishing lots of fake/extra introduction points for $ONIONSITE and using
those to interpose themselves into communications?
/* END PROBABLY WRONG SECTION */


> I'm afraid the details of what's in that daily descriptor are not in my
> brain at the moment.  Does it contain its own (daily blinded) name under
> the signature?
>

  George?

  -a

--
* Layer-2 analogy: https://twitter.com/AlecMuffett/status/802161730591793152


-- 
http://dropsafe.crypticide.com/aboutalecm


Re: [tor-dev] Proposition: Applying an AONT to Prop224 addresses?

2017-04-03 Thread Ian Goldberg
On Mon, Apr 03, 2017 at 04:40:52PM +0100, Alec Muffett wrote:
> On 3 Apr 2017 3:48 p.m., "Ian Goldberg"  wrote:
> 
> The other thing to remember is that didn't we already say that
> 
> facebookgbiyeqv3ebtjnlntwyvjoa2n7rvpnnaryd4a.onion
> 
> and
> 
> face-book-gbiy-eqv3-ebtj-nlnt-wyvj-oa2n-7rvp-nnar-yd4a.onion
> 
> will mean the same thing?  So we're already past the "one (st)ring to
> rule them all" point?
> 
> 
> That's a great point, and I'm definitely interested and in favour of
> readability.
> 
> How about this, though: I know that Tor doesn't want to be in the business
> of site reputation, but what if (eg) Protonmail offers an Onion "Safe
> Browsing" extension some day, of known-bad Onions for malware reasons?

That's a quite good motivating example, thanks!

> There's quite a gulf between stripping hyphens from a candidate onion
> address and doing strcmp(), versus either drilling into the candidate
> address to compute the alternative forms to check against the blacklist, or
> even requiring the blacklist to be 8x larger?

Yes, that's true.  I'm definitely in favour of the "multiply by L (the
order of the group) and check that you get the identity element; error
with 'malformed address' if you don't" to get rid of the torsion point
problem.

If the daily descriptor uploaded to the point
Hash(onionaddr, dailyrand) contained Hash(onionaddr, dailyrand) *in* it
(and is signed by the master onion privkey, of course), then tor
could/should check that it reached that location through the "right"
onion address.

I'm afraid the details of what's in that daily descriptor are not in my
brain at the moment.  Does it contain its own (daily blinded) name under
the signature?

   - Ian


Re: [tor-dev] Proposition: Applying an AONT to Prop224 addresses?

2017-04-03 Thread Alec Muffett
On 3 Apr 2017 3:48 p.m., "Ian Goldberg"  wrote:

The other thing to remember is that didn't we already say that

facebookgbiyeqv3ebtjnlntwyvjoa2n7rvpnnaryd4a.onion

and

face-book-gbiy-eqv3-ebtj-nlnt-wyvj-oa2n-7rvp-nnar-yd4a.onion

will mean the same thing?  So we're already past the "one (st)ring to
rule them all" point?


That's a great point, and I'm definitely interested and in favour of
readability.

How about this, though: I know that Tor doesn't want to be in the business
of site reputation, but what if (eg) Protonmail offers an Onion "Safe
Browsing" extension some day, of known-bad Onions for malware reasons?

There's quite a gulf between stripping hyphens from a candidate onion
address and doing strcmp(), versus either drilling into the candidate
address to compute the alternative forms to check against the blacklist, or
even requiring the blacklist to be 8x larger?

-a


Re: [tor-dev] GSoC 2017 - unMessage: a privacy enhanced instant messenger

2017-04-03 Thread Felipe Dau
Here is an update with the final proposal I submitted to GSoC.

Thanks,
-Felipe

# unMessage: an anonymity enhanced instant messenger

In an age where spying, surveillance and censorship have evidently become
regular practices by various kinds of attackers, it is sensible to be
concerned about instant messaging applications, which are very popular
communication tools that handle private and identifying information.
Such a scenario demands solutions that protect users from the harm these
attacks might cause.

There are currently good solutions, such as [Signal], [Wire] and
[OMEMO] apps, that make end-to-end encrypted conversations possible.
Although such apps successfully provide privacy, they depend heavily on
servers and metadata in order to work, and they are not able to provide
anonymity. [Ricochet] is an app that solves this problem by avoiding
such dependencies. However, it relies heavily on the transport it uses
and does not offer its own encryption layer.

[unMessage] is also one of those solutions: a peer-to-peer anonymity
enhanced instant messenger written in Python that I have been working
on for a while with [David Andersen] - my advisor. unMessage uses its
own end-to-end encrypted [protocol] to maintain conversations,
focusing on not depending on servers, metadata or transport. We have
recently released an alpha version which should make it easy for
developers to install it and test its current features, such as
message exchanges, authentication, and voice chat, but there is still
a lot of work to do in order to reach a mature state where users can
trust it for its properties and usability. As we believe unMessage has
the potential to become a great anonymity enhancing app, with code
that is simple, readable and therefore easy to maintain, I propose to
work on it during this year's Google Summer of Code with the support
of the Tor community to bring it closer to maturity. We expect to
implement fixes, improvements and features from our discussions (on
its [tracker] and [tor-dev]) in order to turn it into a maintainable,
feature-rich and useful app which everyone can benefit from.

## Technologies

unMessage's features were possible with the use of the following
technologies:

- Transport makes use of [Twisted], [Tor Onion Services] and
  [txtorcon]

- Encryption is performed using the [Double Ratchet Algorithm]
  implemented in [pyaxo] - which uses [PyNaCl]

- Authentication makes use of the [Socialist Millionaire Protocol]
  implemented in [Cryptully]

- Transport metadata is minimized by Tor and application metadata by
  the unMessage [protocol]

- User interfaces are created with [Tkinter] for the [GUI] and
  [curses] for the [CLI]

- Voice chat uses the [Opus codec] for constant bitrate encoding

## Contributions

Since its current (alpha) release, we have been discussing it with
[Patrick Schleizer] and [HulaHoop] from [Whonix], who are making
great contributions to help us test it, as well as suggesting new
features and improvements. We are also working to run it on Whonix
(which will allow it to be run on Tails as well) with help from
[meejah] by adding a new feature to txtorcon to make unMessage (and
all the apps that use txtorcon) "Control Port Filter friendly".

Since the introduction of this project for GSoC, [dawuud] and [meejah]
became interested in contributing and mentoring it and also assisted
me on making this proposal.

## Tasks

The project is split into tasks, each assigned an ID (in parentheses)
that is used to compose the timeline. I have been generous with how
much time each task will demand and I am also leaving the whole week
of each evaluation to review and make sure the deliverables meet
expectations. Therefore, it is possible that I am able to work on
additional tasks in case they consume less time than planned.

### Improve setup script (T1)

This task will improve unMessage's `setup.py` by removing redundant
package metadata, using files for the requirements, and offering
development requirements. This task will be tracked in [issue 35].

### Use attrs (T2)

[attrs] will be used to simplify the code by removing boilerplate,
making it more concise, and consequently improving its quality. Class
definitions will be modified to use attrs' declarations so that
attributes have default types and values, as well as validation. This
task will be tracked in [issue 34].
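
For illustration, the kind of declaration attrs enables (the class and
attribute names here are hypothetical, not actual unMessage code):

    import attr

    @attr.s
    class Contact(object):
        # attrs generates __init__, __repr__, __eq__, etc. from these
        # declarations, replacing hand-written boilerplate.
        name = attr.ib(validator=attr.validators.instance_of(str))
        address = attr.ib(default='')
        is_verified = attr.ib(default=False)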

### Support file transfer (T3)

unMessage is able to support various elements of a conversation, such
as requests, messages and authentication. New elements to transmit
file requests and the actual files will be added and handled by the
elements parser. This task will be tracked in [issue 12].
 
### Add a logger (T4)

There is currently no logging being done, so the only possible way to
debug is through the UIs. A module will be added to send logs to the
terminal and to a file. This task will be tracked in [issue 30].
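
A minimal sketch of such a module (the helper name and log format are
placeholders):

    import logging

    def make_logger(path='unmessage.log'):
        # Send records both to the terminal and to a file.
        log = logging.getLogger('unmessage')
        log.setLevel(logging.DEBUG)
        for handler in (logging.StreamHandler(), logging.FileHandler(path)):
            handler.setFormatter(logging.Formatter(
                '%(asctime)s %(name)s %(levelname)s %(message)s'))
            log.addHandler(handler)
        return log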

### Make functions/methods asynchronous (T5)

unMessage's initial implementation did not use Twisted 

Re: [tor-dev] Proposition: Applying an AONT to Prop224 addresses?

2017-04-03 Thread Alec Muffett
On 3 April 2017 at 13:04, George Kadianakis  wrote:

> I'm calling it weird because I'm not sure how an
> attacker can profit from being able to provide two addresses that
> correspond to the same key, but I can probably come up with a few
> scenarios if I think about it.


Hi George!

I'll agree it's a weird edge case :-)

I think the reason my spider-sense is tingling is because years of cleaning
up after intrusions has taught me that sysadmins and human beings are very
bad at non-canonical address formats, especially where they combine them
with either blacklisting, or else case-statements-with-default-conditions.

If one creates scope for saying "the address is .onion but you can
actually use .onion or .onion which are equivalent" - then
someone will somehow leverage that either a) for hackery, or b) for social
engineering.

Compare:

* http://01770001
* http://2130706433
* http://0177.0.0.1  <- this one tends to surprise people
* http://127.0.0.1

…and the sort of fun shenanigans that can be done with those "equivalent
forms"

People who've been trained not to type [X] into their browser, might be
convinced to type [X']

It's a lot easier for people to cope with there being one-and-only-one
viable form for any given hostname or address-representation.

-a

 --
http://dropsafe.crypticide.com/aboutalecm


Re: [tor-dev] GSoC 2017 - Project "Crash Reporter for Tor Browser"

2017-04-03 Thread Nur-Magomed
Tom, thanks for review,
I've sent the proposal final version through gsoc site.

__

>It would be cool to build the browser with https://github.com/google/sanitizers;
>this way you could get bug reports for bugs that don't panic the browser

Hi Antonio,
Thanks for your reply!
I've added it to the proposal as optional.


2017-04-03 8:42 GMT+03:00 Antonio Groza :

> It would be cool to build the browser with https://github.com/google/sanitizers;
> this way you could get bug reports for bugs that don't panic the browser
>
> On Mon, 3 Apr 2017 at 07:10, Tom Ritter wrote:
>
>> On 1 April 2017 at 09:22, Nur-Magomed  wrote:
>> > Hi Tom,
>> > I've updated Proposal[1] according to your recommendations.
>> >
>> > 1) https://storm.torproject.org/grain/ECCJ3Taeq93qCvPJoWJkkY/
>>
>> Looks good to me!
>>
>> > 2017-03-31 19:46 GMT+03:00 Tom Ritter :
>> >>
>> >> On 31 March 2017 at 10:27, Nur-Magomed  wrote:
>> >> >> I think we'd want to enhance this form. IIRC the 'Details' view is
>> >> >> small and obtuse and it's not easy to review. I'm not saying we
>> >> >> _should_ create these features, but here are a few I brainstormed:
>> >> >
>> >> > Yes, actually that form only shows a "Key: Value" list; we can
>> >> > break it down into several GroupBoxes which consist of grouped
>> >> > data fields and checkboxes to include them.
>> >> >
>> >> >> Let's try and avoid GDocs if you don't mind :)
>> >> >
>> >> > Sorry :) I already registered on storm, but I had no access to
>> >> > create one. Thanks for the review, I'll update the proposal
>> >> > according to your requirements.
>> >>
>> >> No worries.
>> >>
>> >> > And a question: could we drop the Windows or macOS version (or
>> >> > both) from the timeline, and develop them after the summer?
>> >>
>> >> Yes, I think that's fine. I think getting one platform to completion
>> >> would be a great accomplishment and would lay the groundwork and
>> >> improve the momentum to getting the subsequent platforms there.
>> >>
>> >> -tom


Re: [tor-dev] [prop269] [prop270] Ideas from Tor Meeting Discussion on Post-Quantum Crypto

2017-04-03 Thread Nick Mathewson
On Fri, Mar 31, 2017 at 10:20 PM, isis agora lovecruft wrote:
> Hey hey,
>
> In summary of the breakaway group we had last Saturday on post-quantum
> cryptography in Tor, there were a few potentially good ideas I wrote down,
> just in case they didn't make it into the meeting notes:
>
>  * A client should be able to configure "I require my entire circuit to have
>PQ handshakes" and "I require at least one handshake in my circuits to be
>PQ".  (Previously, we had only considered having consensus parameters, in
>order to turn the feature on e.g. once 20% of relays supported the new
>handshake method.)

+1 on having something like this happen in some way, -0 on having
client configuration be the recommended way for any purpose other than
testing (Having clients behave differently is best avoided.)

Our usual approach for this kind of thing is a consensus parameter that
can be overridden with a local option.


>  * Using stateful hash-based signatures to sign descriptors and/or consensus
>documents, and (later) if state has been lost or compromised, then request
>the last such document submitted to regain state (probably skipping over
>all the leaves of the last used node in the tree, or the equivalent, to be
>safe).  (This requires more concrete design analysis, including the effects
>of the large size of hash-based signatures on the directory bandwidth
>usage, probably in a proposal or longer write up, should someone awesome
>decide to research this idea further. :)

Interesting!  I'd hope we do this as a separate proposal.

Also my hope is that in our timeline, we prioritize PQ encryption over
authentication, since PQ encryption provides us forward secrecy
against future quantum computers, whereas PQ authentication is only
useful once a sufficient quantum computer exists.

(That's no reason not to think about PQ authentication, but with any
luck, we can wait a few years for the PQ crypto world to invent some
even better algorithms.)

peace,
-- 
Nick


Re: [tor-dev] Rethinking Bad Exit Defences: Highlighting insecure and sensitive content in Tor Browser

2017-04-03 Thread David Goulet
On 28 Mar (11:19:45), Tom Ritter wrote:
> It seems reasonable but my first question is the UI. Do you have a
> proposal?  The password field UI works, in my opinion, because it
> shows up when the password field is focused on. Assuming one uses the
> mouse to click on it (and doesn't tab to it from the username) - they
> see it.
> 
> How would you communicate this for .onion links or bitcoin text? These
> fields are static text and would not be interacted with in the same
> way as a password field.
> 
> A link could indeed be clicked - so that's a hook for UX... A bitcoin
> address would probably be highlighted for copying so that's another
> hook... But what should it do?

I do believe this could be an important safety improvement even if it is
not perfect. I'm unsure how the Tor Browser team operates for this kind of
feature, but Tom's request here is a logical start: try to come up with a
proposal of what the UI would look like, and then we can go to ticket land,
I guess...

I'm no UI expert, nor even good at judging them, but I have a feeling we
should go towards something "intrusive" in order to make SURE users notice
the potential danger and actually get annoyed by it, to the point that they
want to *avoid* HTTP sites so as not to deal with it.

nusenu's idea of doing what NoScript does is appealing to me: cover the
.onion/bitcoin address on an HTTP clearnet site, so that you have to click
on it to see it, with a big ass warning saying "Make sure you understand
that this address could have been changed in transit" kind of thing (in
very non-technical terms, of course).

That way, users will clearly see that getting an address from an HTTP site
over Tor is _harmful_, and that is what we should convey to users with this
annoying mechanism.

This might sound kind of radical, but safety first! I really don't see a
compromise nor an argument for "usability" here if we believe that this is
basically dangerous.
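
For context, a crude sketch of the kind of address detection being
discussed (the regexes are illustrative only; as Donncha notes below,
real detection would have to resist active obfuscation by malicious
exits):

    import re

    # v2 onions are 16 base32 characters; prop224 (v3) addresses are 56.
    ONION_RE = re.compile(r'\b[a-z2-7]{16}(?:[a-z2-7]{40})?\.onion\b')
    # Base58 bitcoin addresses (P2PKH and P2SH).
    BITCOIN_RE = re.compile(r'\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b')

    def find_sensitive_addresses(page_text):
        return ONION_RE.findall(page_text) + BITCOIN_RE.findall(page_text)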

Cheers!
David

> 
> -tom
> 
> 
> On 28 March 2017 at 10:31, Donncha O'Cearbhaill  wrote:
> > Hi all,
> >
> > The Tor bad-relay team regularly detects malicious exit relays which are
> > actively manipulating Tor traffic. These attackers appear financial
> > motivated and have primarily been observed modifying Bitcoin and onion
> > address which are displayed on non-HTTPS web pages.
> >
> > Increasingly these attackers are becoming more selective in their
> > targeting. Some attackers are only targeting a handful of pre-configured
> > pages. As a result, we often rely on Tor users to report bad exits and
> > the URLs which are being targeted.
> >
> > In Firefox 51, Mozilla started to highlight HTTP pages containing
> > password form fields as insecure [1]. This UI clearly and directly
> > highlights the risk involved in communicating sensitive data over HTTP.
> >
> > I'd like to investigate ways that we can extend a similar UI to Tor
> > Browser which highlight Bitcoin and onion addressed served over HTTP. I
> > understand that implementing this type of Bitcoin and onion address
> > detection would be less reliable than Firefox's password field
> > detection. However even if unreliable it could increase safety and
> > increase user awareness about the risks of non-secure transports.
> >
> > There is certainly significant design work that needs to be done to
> > implement this feature. For example, .onion origins need be treated as
> > secure, but only if they don't included resources from non-secure
> > origins. We would also need to make the onion/bitcoin address detection
> > reliable against active obfuscation attempts by malicious exits.
> >
> > I'd like to hear any and all feedback, suggestions or criticism of this
> > proposal.
> >
> > Kind Regards,
> > Donncha
> >
> >
> > [1]
> > https://blog.mozilla.org/security/2017/01/20/communicating-the-dangers-of-non-secure-http/
> >
> >
> >

-- 
h4Neylkd5WBoXhbKp3jB2fYUAy2NrRar7O7oyNaGg4M=




Re: [tor-dev] Proposition: Applying an AONT to Prop224 addresses?

2017-04-03 Thread Ian Goldberg
On Mon, Apr 03, 2017 at 03:04:47PM +0300, George Kadianakis wrote:
> Hey people,
> 
> thanks for the R here. I'm currently trying to balance the tradeoffs
> here and decide whether to go ahead and implement this feature.
> 
> My main worry is the extra complexity this brings to our address
> encoding/decoding process and to our specification, as well as when
> explaining the scheme to people.
> 
> Other than that, this seems like a reasonable improvement for a weird
> phishing scenario. I'm calling it weird because I'm not sure how an
> attacker can profit from being able to provide two addresses that
> correspond to the same key, but I can probably come up with a few
> scenarios if I think about it. Furthermore, this solution assumes a
> sloppy victim that does a partial spot-check (if the victim verified the
> whole address this design would make no difference).
> 
> BTW, isn't this phishing threat also possible in bitcoin (which is also
> using a 4-byte checksum that can be bruteforced)? Have there been any
> attacks of this nature?
> 
> Anyhow my first intuition is to just do this, as it seems like an
> improvement and it's probably not a huge amount of work. It can probably
> be done pretty cleanly if we abstract away the whole AONT construction
> and the custom-ish base32 encoding/decoding. I'm just worrying about
> putting more stuff in our already overloaded development bucket.
> 
> Is there a name for this AONT construction btw?

As my student Nik noticed, this isn't *technically* an AONT, since
diffusion only happens "to the left", but that's where we want to
randomize things if any bit of the address changes.

But if we're down to just pubkey + checksum + *1 bit of version*, then
I'm not totally sold on the point of the AONT, since there are exactly 0
bits that can be twiddled while not changing the pubkey.  *Note*: this
is assuming that if we ever change the version number, *then* we do an
AONT or something so that version 0 and version 1 addresses that have
the same pubkey end up looking totally different (at least at the left
end).
-- 
Ian Goldberg
Professor and University Research Chair
Cheriton School of Computer Science
University of Waterloo
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Comments on proposal 279 (Name API)

2017-04-03 Thread Nick Mathewson
On Mon, Apr 3, 2017 at 8:20 AM, George Kadianakis  wrote:
> Nick Mathewson  writes:
>> Section 2.1 and elsewhere:
>>
>> I suggest that we require all address suffixes to end with .onion;
>> other TLDs are not reserved like .onion is, and maybe we shouldn't
>> squat any we haven't squatted already.   I think we might also want to
>> have all output addresses end with .onion too.
>>
>> I suggest  also that we might want to reserve part of the namespace
>> for standardized namespaces and some for experimental or local ones.
>> Like, if we standardize on namecoin that could be .bit.onion, but if
>> we don't, it could be .bit.x.onion.
>>
>
> I have mixed feelings about keeping the .onion suffix.
>
> On one hand it seems like The Right Thing to do, since we managed to
> get .onion standardized in the IETF, which comes with various
> benefits. Also, starting to squat other TLDs arbitrarily seems like a
> silly thing to do.
>
> However, I also dislike asking users to visit something.bit.onion
> instead of something.bit, since people are not used to the second TLD
> having a semantic meaning, and I can imagine people getting very
> confused about what it means.

Indeed.  And I'm not only concerned about people becoming confused: I
am also worried about confused programs.

Right now, it is easy to answer the question "will Tor handle this
address specially" -- the only special addresses are the ones ending
with ".onion", and the legacy suffices ".exit" and ".noconnect" as
documented as address-spec.txt.  But if we allowed arbitrary TLDs in
this proposal, then _any_ hostname would potentially be an address
that Tor would handle specially.
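
As a sketch of how simple that check can stay if everything is kept
under .onion (hypothetical code, not Tor's actual implementation):

    # The closed set of suffixes Tor treats specially today, per
    # address-spec.txt.
    SPECIAL_SUFFIXES = (".onion", ".exit", ".noconnect")

    def tor_handles_specially(hostname):
        # With arbitrary plugin TLDs, no such fixed check would exist.
        return hostname.lower().rstrip(".").endswith(SPECIAL_SUFFIXES)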

-- 
Nick
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] GSoC: Questions on allowing for more DNS request types

2017-04-03 Thread Nick Mathewson
On Sun, Apr 2, 2017 at 10:20 AM, Lucille Newman  wrote:
> Hello,
>
> I was interested in the project for allowing any kind of DNS support in Tor
> for GSoC, or, since it is late for that deadline, then also otherwise. After
> reading proposal 219, I have some questions.
>
> 1. A comment by NM suggests that we should specify exact behavior when
> generating DNS packets (line 56). Should the DNS packets not be generated as
> according to RFC 1035? Are there other things that need to be taken into
> consideration here?

Hi!

The issue is that RFC 1035 and other DNS RFCs allow a certain amount
of latitude in exactly how DNS requests are encoded.  As one simple
example: name compression is recommended but not required. I
believe there are other examples too.

On the request side, that's bad for anonymity: we'd rather have all
clients encoding their requests in the same way, so that exits can't
tell them apart any more than necessary.

On the response side, I think it's okay to have different exits encode
responses differently.
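
As an illustration of what "all clients encode alike" could mean in
practice, here is one fully pinned-down RFC 1035 query encoding in
Python. This is a sketch only; the proposal would have to specify each
of these choices explicitly:

    import struct

    def canonical_query(qid, qname, qtype):
        # Pin every degree of freedom: RD=1 and all other flags zero,
        # exactly one question, no EDNS, no name compression, lowercase
        # labels.
        header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
        question = b""
        for label in qname.rstrip(".").lower().split("."):
            data = label.encode("ascii")
            if not 1 <= len(data) <= 63:
                raise ValueError("bad label length")
            question += bytes([len(data)]) + data
        question += b"\x00"                       # root label ends the name
        question += struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
        return header + question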

> 2. Another comment (line 63) asks whether 496 bytes is enough for the DNS
> packet of a DNS_BEGIN cell. Since QNAME can be arbitrarily long, I suppose
> it is possible that 496 is not enough? If this seems like a reasonable
> concern, then maybe we could do a similar thing to the DNS_RESPONSE cells
> with allowing multiple cells for a single question and having a flag to
> indicate the last cell?

That would probably be fine.
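
A sketch of that multi-cell idea, with the 496-byte figure taken from
the question above (the tuple layout is illustrative, not proposal
219's actual cell format):

    CELL_PAYLOAD = 496

    def chunk_dns_packet(packet):
        # Split an encoded DNS packet across cells; the boolean marks
        # the final cell, mirroring the suggested last-cell flag.
        chunks = [packet[i:i + CELL_PAYLOAD]
                  for i in range(0, len(packet), CELL_PAYLOAD)] or [b""]
        return [(i == len(chunks) - 1, c) for i, c in enumerate(chunks)]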

> 3. What would cause a DNS_BEGIN request or response to be aborted (line
> 105)?

It might make sense to abort a request if the client realizes that the
application no longer wants it -- for example, if it's happening in
response to a TCP DNS request (not currently supported on the client
side) and the TCP connection is closed.

I don't know if it's absolutely necessary to support that.

> 4. How do we differentiate special names like .onion, .exit, .noconnect
> (line 145)?

I think we could go with the list in address-spec.txt in the torspec repository.

> 5. The comments at (lines 135-143) indicate that it might not be necessary
> or practical to refuse requests that resolve to local addresses. This means
> that such queries will not be sent, but an error will be returned before
> sending to a DNS server?

I think that's the intended behavior, if it makes good security sense.


Peace,
-- 
Nick
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Comments on proposal 279 (Name API)

2017-04-03 Thread George Kadianakis
Nick Mathewson  writes:

> Hi !  I'll make some comments here on the draft onion naming API at
>
> https://gitweb.torproject.org/torspec.git/tree/proposals/279-naming-layer-api.txt
>
> (Some of  these are probably things you already meant, or already said
> elsewhere.)
>

Thanks for the timely comments! I'm replying to this thread with my
thoughts, but I didn't have time to actually fix the proposal. I'll do
that in The Future.

>
>
> Section 2.1 and elsewhere:
>
> I suggest that we require all address suffixes to end with .onion;
> other TLDs are not reserved like .onion is, and maybe we shouldn't
> squat any we haven't squatted already.   I think we might also want to
> have all output addresses end with .onion too.
>
> I suggest  also that we might want to reserve part of the namespace
> for standardized namespaces and some for experimental or local ones.
> Like, if we standardize on namecoin that could be .bit.onion, but if
> we don't, it could be .bit.x.onion.
>

I have mixed feelings about keeping the .onion suffix.

On one hand it seems like The Right Thing to do, since we managed to
get .onion standardized in the IETF, which comes with various
benefits. Also, starting to squat other TLDs arbitrarily seems like a
silly thing to do.

However, I also dislike asking users to visit something.bit.onion
instead of something.bit, since people are not used to the second TLD
having a semantic meaning, and I can imagine people getting very
confused about what it means.

Anyhow, it seems like maintaining the .onion suffix is the right
approach here.

> I finally suggest that we distinguish names that are supposed to be
> global from ones that aren't.
>
> Section 2.3:
>
> How about we require that the suffixes be distinct?  If we do that, we
> can drop this "priority" business and we can make the system's
> behavior much easier to understand and explain.
>

Definitely agreed on this simplification suggestion. The priority
feature has confused people, and it's not that useful. In the future we
could reinstate it if we consider it practical.

> Let's require that the TLDs actually begin with a dot.  (That is, I
> think that ".foo.onion" can include "bar.foo.onion", but I don't like
> the idea of "foo.onion" including "barfoo.onion".)
>

Makes sense.

>
> Section 2.3.1:
>
> Does the algorithm apply recursively?  That is, can more than one
> plugin rewrite the same address, or can one plugin rewrite its own
> output?
>
> (I would suggest "no".)
>

Agreed no. We should specify it.

> I think there should be a way for a plugin to say "This address
> definitely does not exist" and stop resolution.  Otherwise no plugin
> can be authoritative over a TLD.
>

Agreed.

> Section 2.5.1:
>
> Is the algorithm allowed to produce non-onion addresses?  Should it be?
>

I'd say no. We should specify this. 

> Must query IDs be unique?  Over what scope must they be unique? Who
> enforces that?
>

I think the NS API client should enforce that, and maybe the server
should throw an error if it's not unique.

We should specify.

> May query IDs be negative?  Can they be arbitrarily large?
>

We should specify this too.

> I think result should indeed be optional on failure.
>
> Section 2.5.1 and 2.5.2:
>
> We should specify what exactly clients and plugins will do if they
> receive an unrecognized message, or a malformed message.
>

Agreed.

> Section 2.5.3.
>
> See security notes on caching below; client-side caching can lead to
> undesirable results.
>

Agreed.

> As noted above, I agree with requiring all result addresses to be .onion.
>
> Section 3.1:
>
> I prefer the "put everything under .onion" option.   I also think that
> we should require that the second-level domain be 10 characters or
> less, to avoid confusion with existing onion addresses.
>

We should think more about this, but seems reasonable.

>
>
> General questions:
>
> I know we've done stdout/stdin for communication before, but I wonder
> if we should consider how well it's worked for us.  Portability on
> Windows can be kind of hard.
>
> Two alternatives are TCP and named pipes.
>
> Another alternative might be just using the DNS protocol and asking
> for some kind of "ONION_CNAME" record.  (DNS is ugly, but at least
> it's nice and standard.)
>

Yup, I think this is an _important_ open part of the proposal that we
should figure out sooner rather than later. Ideally, we should consult Nathan
or mtigas or other members of our mobile team. I wish I had done this
during the dev meeting...

TCP seems like a plausible alternative here. Unfortunately, we will have
to invent a new protocol for that, though.
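
For a feel of how small such a protocol could be, here is a
hypothetical line-based stdin/stdout loop. The message names, status
codes, and example mapping below are placeholders, not the proposal's
actual syntax:

    import sys

    # Placeholder mapping; a real plugin would consult Namecoin, a
    # hosts file, etc.
    TABLE = {"example.bit.onion": "a" * 56 + ".onion"}

    def serve():
        # One query per line in, one answer per line out.
        for line in sys.stdin:
            parts = line.split()
            if len(parts) != 3 or parts[0] != "RESOLVE":
                print("ERROR malformed-request", flush=True)
                continue
            _, query_id, name = parts
            if name in TABLE:
                print("RESOLVED %s 0 %s" % (query_id, TABLE[name]), flush=True)
            else:
                # An authoritative "no such name" that stops resolution.
                print("RESOLVED %s 2 -" % query_id, flush=True)

    if __name__ == "__main__":
        serve()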

>
> Security notes:
>
> I'd like to know what the browser people think about the risks here of
> (eg) probing to see whether the user has certain extensions installed
> or names mapped.  Maybe .hosts.onion should only be allowed in the
> address bar, not in HREF attributes et al?
>

Yep, David F. also mentioned this problem. We should think of how to

Re: [tor-dev] Tor in a safer language: Network team update from Amsterdam

2017-04-03 Thread ng0
z...@manian.org transcribed 12K bytes:
> Rust seems like the best available choice for Tor in a safer language.
> 
> There are several issues with securely obtaining a Rust toolchain that the
> Tor community should be attentive to.

Interesting development, but logical. Leaving the obvious issues
(bootstrap, etc) aside:

Will you stick to stable features? From a package maintainer's position
it is generally unacceptable (and hard) to follow (and maintain)
nightly/unstable releases of a programming language. Rust stable has
proven features which are expected to stick around for a reliably long
time (at least that is my understanding).

> Rust is a self hosted compiler. Building Rust requires obtaining binaries
> for a recent Rust compiler. The Rust toolchain is vulnerable to a "trusting
> trust" attack. Manish made a prototype and discussed future mitigations.[0]
> 
> The Rust toolchain is built by an automated continuous integration system
> and distributed without human verification or intervention. Rust's build
> artifacts distributed by the RustUp tool are only authenticated by TLS
> certificates. RustUp Github issue 241 discusses a mitigation to address
> some of these concerns but development seems to be stalled.[1]
> 
> 
> [0]
> https://manishearth.github.io/blog/2016/12/02/reflections-on-rusting-trust/
> [1] https://github.com/rust-lang-nursery/rustup.rs/issues/241
> 
> 
> 
> On Fri, Mar 31, 2017 at 2:23 PM Sebastian Hahn 
> wrote:
> 
> > Hi there tor-dev,
> >
> > as an update to those who didn't have the chance to meet with us in
> > Amsterdam or those who haven't followed the efforts to rely on C less,
> > here's what happened at the "let's not fight about Go versus Rust, but
> > talk about how to migrate Tor to a safer language" session and what
> > happened after.
> >
> > Notes from session:
> >
> > We didn't fight about Rust or Go or modern C++. Instead, we focused on
> > identifying goals for migrating Tor to a memory-safe language, and how
> > to get there. With that frame of reference, Rust emerged as a extremely
> > strong candidate for the incremental improvement style that we
> > considered necessary. We were strongly advised to not use cgo, by people
> > who have used it extensively.
> >
> > As there are clearly a lot of unknowns with this endeavor, and a lot
> > that we will learn/come up against along the way, we feel that Rust is a
> > compelling option to start with, with the caveat that we will first
> > experiment, learn from the experience, and then build on what we learn.
> >
> > You can also check out the session notes on the wiki (submitted, but not
> > posted yet).[1]
> >
> > The real fun part started after the session. We got together to actually
> > make a plan for an experiment and to give Rust a serious chance. We
> > quickly got a few trivial things working like statically linking Rust
> > into Tor, integrating with the build system to call out to cargo for the
> > Rust build, and using Tor's allocator from Rust.
> >
> > We're planning to write up a blog post summarizing our experiences so
> > far while hopefully poking the Rust developers to prioritize the missing
> > features so we can stop using nightly Rust soon (~months, instead of
> > years).
> >
> > We want to have a patch merged into tor soon so you can all play with
> > your dev setup to help identify any challenges. We want to stress that
> > this is an optional experiment for now, we would love feedback but
> > nobody is paid to work on this and nobody is expected to spend more
> > time than they have sitting around.
> >
> > We have committed to reviewing any patch that includes any Rust code to
> > provide feedback, get experience to develop a style, and actually make
> > use of this experiment. This means we're not ready to take on big
> > patches that add lots of tricky stuff quite now, we want to take it slow
> > and learn from this.
> >
> > We would like to do a session at the next dev meeting to give updates on
> > this effort, but in the meantime, if team members would like to start
> > learning Rust and helping us identify/implement small and well-isolated
> > areas to begin migration, or new pieces of functionality that we can
> > build immediately in Rust, that would be really great.
> >
> > So, for a TLDR:
> >
> > What has already been done:
> > - Rust in Tor build
> > - Putting together environment setup instructions and a (very small)
> >  initial draft for coding standards
> > - Initial work to identify good candidates for migration (not tightly
> >  interdependent)
> >
> > What we think are next steps:
> > - Define conventions for the API boundary between Rust and C
> > - Add a non-trivial Rust API and deploy with a flag to optionally use
> >  (to test support with a safe fallback)
> > - Learn from similar projects
> > - Add automated tooling for Rust, such as linting and testing
> >
> >
> > Cheers
> > Alex, Chelsea, Sebastian
> >
> > [1]: Will be visible here
> > 

Re: [tor-dev] GSoC 2017 - Feedback Extension for Tor Browser

2017-04-03 Thread Veer Kalantri
Is there any difference between the draft and the first application we
have submitted? Please answer soon, as I have to submit mine before
2130 hrs tonight...

Best,
Veer

On 02-Apr-2017 8:16 PM, "Jayati Dev"  wrote:

> Dear Mentor,
>
>
> Please find my draft application here: https://docs.google.com/
> document/d/1AItxT3k-K1tSHa0OJDl3AC4BL_xYoCUWv6IHDnb4if8/edit?usp=sharing.
> I have also uploaded it through the GSoC 2017 Official Website.
>
>
> Thank you for your time and effort,
>
>
> Sincerely,
>
> Jayati Dev
>
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
>
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposition: Applying an AONT to Prop224 addresses?

2017-04-03 Thread George Kadianakis
Ian Goldberg  writes:

> On Mon, Mar 27, 2017 at 01:59:42AM -0400, Ian Goldberg wrote:
>> > To add an aside from a discussion with Teor: the entire "version" field
>> > could be reduced to a single - probably "zero" - bit, in a manner perhaps
>> > similar to the distinctions between Class-A, Class-B, Class-C... addresses
>> > in old IPv4.
>> > 
>> > Thus: if the first bit in the address is zero, then there is no version,
>> > and we are at version 0 of the format
>> > 
>> > If the first bit is one, we are using v1+ of the format and all bets are
>> > off, except that the obvious thing then to do is count the number of 1-bits
>> > (up to some limit) and declare that to be version number.  Once we're up to
>> > 3 or 4 or 7 or 8 one-bits, then shift version encoding totally.
>> > 
>> > Teor will correct me if I misquote him, but the advantage here was:
>> > 
>> > a) the version number is 1 bit, i.e. small, for the foreseeable / if we get
>> > it right
>> > 
>> > b) in pursuit of smallness, we could maybe dump the hash in favour of a
>> > AONT + eyeballs, which would give back a bunch of extra bits
>> > 
>> > result: shorter addresses, happier users.
>> 
>> You indeed do not require a checksum under an AONT, but you do require
>> redundancy if you want to catch typos.  Something like
>> 
>> base32( AONT( pubkey || 0x0000 ) || version)
>> 
>> is fine.  If you want "version" to be a single bit, then the AONT would
>> have to operate on non-full bytes, which is a bit (ha!) annoying, but
>> not terrible.  In that case, "0x0000" would actually be 15 bits of 0,
>> and version would be 1 bit.  This would only save 1.4 base32 characters,
>> though.  If you took off some more bits of the redundancy (down to 8
>> bits?), you would be able to shave one more base32 char.  And indeed, if
>> you make the redundancy just a single byte of 0x00, then the extra 0-bit
>> for the "version" actually fits neatly in the one leftover bit of the
>> base32 encoding, I think, so the AONT is back to working on full bytes.
>> 
>> But is a single byte of redundancy enough?  It will let through one out
>> of every 256 typos.  (I thought we had spec'd 2 bytes for the checksum
>> now, but maybe I misremember?  I'm also assuming we're using a simple
>> 256-bit encoding of the pubkey, rather than something more complex that
>> saves ~3 bits.)
>> 
>> (Heading to the airport.)
>
> OK, here are the details of this variant of the proposal.  Onion
> addresses are 54 characters, and the typo-resistance is
> 13 bits (1/8192 typos are not caught).
>
> Encoding:
>
> raw is a 34-byte array.  Put the ed25519 key into raw[0..31] and 0x0000
> into raw[32..33].  Note that there are really only 13 bits of 0's for
> redundancy, plus the 0 bit for the version, plus 2 unused bits in
> raw[32..33].
>
> Do the AONT.  Here G is a hash function mapping 16-byte inputs to
> 18-byte outputs, and H is a hash function mapping 18-byte inputs to
> 16-byte outputs.  Reasonable implementations would be something like:
>
> G(input) = SHA3-256("Prop224Gv0" || input)[0..17]
> H(input) = SHA3-256("Prop224Hv0" || input)[0..15]
>
> raw[16..33] ^= G(raw[0..15])
> # Clear the last few bits, since we really only want 13 bits of redundancy
> raw[33] &= 0xf8
> raw[0..15] ^= H(raw[16..33])
>
> Then base32-encode raw[0..33].  The 56-character result will always end
> in "a=" (the two unused bits at the end of raw[33]), so just remove that
> part.
>
> Decoding:
>
> Base32-decode the received address into raw[0..33].  Depending on your
> base32 decoder, you may have to stick the "a=" at the end of the address
> first.  The low two bits were unused; be sure the base32 decoder sets
> them to 0.  The next lowest bit (raw[33] & 0x04) is the version bit.
> Ensure that (raw[33] & 0x04 == 0); if not, this is a different address
> format version you don't understand.
>
> Undo the AONT:
>
> raw[0..15] ^= H(raw[16..33])
> raw[16..33] ^= G(raw[0..15])
> # Clear the last few bits, as above
> raw[33] &= 0xf8
>
> Check the redundancy by ensuring that raw[32..33] == 0x0000.  If not,
> there was a typo in the address.  (Note again that since we explicitly
> cleared the low 3 bits of raw[33], there are really only 13 bits of
> checking here.)
>
> raw[0..31] is then the pubkey suitable for use in Ed25519.  As before
> (and independently of the AONT stuff), you could sanity-check it to make
> sure that (a) it is not the identity element, and (b) L times it *is*
> the identity element.  (L is the order of the Ed25519 group.)  Checking
> (a) is important; checking (b) isn't strictly necessary for the reasons
> given before, but is still a sensible thing to do.  If you don't check
> (b), you actually have to check in (a) that the pubkey isn't one of 8
> bad values, not just the identity.  So just go ahead and check (b) to
> rest easier. ;-)
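
For concreteness, here is a minimal Python sketch of the encode/decode
just described. G and H are instantiated exactly as suggested above;
the xor helper, the error strings, and the byte plumbing are additions
for illustration:

    import base64, hashlib

    def G(left16):   # 16-byte input -> 18-byte output
        return hashlib.sha3_256(b"Prop224Gv0" + left16).digest()[:18]

    def H(right18):  # 18-byte input -> 16-byte output
        return hashlib.sha3_256(b"Prop224Hv0" + right18).digest()[:16]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(pubkey):
        assert len(pubkey) == 32
        raw = bytearray(pubkey + b"\x00\x00")         # raw[32..33] = 0x0000
        raw[16:34] = xor(raw[16:34], G(bytes(raw[0:16])))
        raw[33] &= 0xf8                               # keep 13 redundancy bits
        raw[0:16] = xor(raw[0:16], H(bytes(raw[16:34])))
        addr = base64.b32encode(bytes(raw)).decode("ascii").lower()
        return addr[:-2]                              # drop the trailing "a="

    def decode(addr):
        raw = bytearray(base64.b32decode(addr.upper() + "A="))
        if raw[33] & 0x04:
            raise ValueError("unrecognized address format version")
        raw[0:16] = xor(raw[0:16], H(bytes(raw[16:34])))
        raw[16:34] = xor(raw[16:34], G(bytes(raw[0:16])))
        raw[33] &= 0xf8                               # clear as above
        if raw[32] or raw[33]:
            raise ValueError("typo: redundancy check failed")
        return bytes(raw[0:32])      # still sanity-check the group order!
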
>
>
> This version contains two calls to SHA3, as opposed to the one such call
> in the non-AONT (but including a checksum) version.  The