Re: [tor-dev] Revisiting prop224 client authorization

2016-10-17 Thread David Goulet
On 17 Oct (13:35:24), George Kadianakis wrote:
> George Kadianakis  writes:
> 
> > [ text/plain ]
> > Hello,
> >
> > we've reached the point in prop224 development where we need to pin down
> > the precise cell formats, so that we can start implementing them. HS
> > client authorization has been one of those areas that are not yet
> > finalized and are still influencing cell format.
> >
> > Here are some topics based on special's old notes, plus some further
> > recent discussion with David and Yawning.
> >
> 
> Hello again,
> 
> I read the feedback on the thread and thought some more about this. Here
> are some thoughts based on received feedback. A torspec branch coming
> soon if people agree with my points below.
> 
> I'd also like to introduce a new topic of discussion here:
> 
> d) Should we introduce the concept of stealth auth again?
> 
>IIUC the current prop224 client auth solutions are not providing all
>the security properties that stealth auth did. Specifically, if Alice
>is an ex-authorized-client of a hidden service and she got revoked,
>she can still fetch the descriptor of a hidden service and hence
>learn the uptime/presence of the HS. IIUC, with stealth auth this was
>not previously possible.

I think this has value if client revocation is something that actually
happens and the operator wants that revoked client to NEVER know anything
about the service anymore.

My gut tells me that it might be a very small portion of operators that do
that and have concerns about hiding the service. I could be wrong, so we can
try to ask around on our public channels and see what the response is.

I can see this feature being added _after_ deployment as well.

> 
> > a) I think the most important problem here is that the authorization-key logic
> >in the current prop224 is very suboptimal. Specifically, prop224 uses a
> >global authorization-key to ensure that descriptors are only read by
> >authorized clients. However, since that key is global, if we ever want to
> >revoke a single client we need to change the keys for all clients. The
> >current rend-spec.txt does not suffer from this issue, hence I adapted the
> >current technique to prop224.
> >
> >Please see my torspec branch `prop224_client_auth` for the proposed changes:
> >
> > https://gitweb.torproject.org/user/asn/torspec.git/log/?h=prop224_client_auth
> >
> >Some further questions here:
> >
> >i) Should we fake the client-auth-desc-key blob in case client authorization
> >   is not enabled? Otherwise, we leak to the HSDir whether client auth is
> >   enabled. The drawback here is the desc size increase (by about 330 bytes).
> >
> >   Alternatively, we can try to put it in the encrypted part of the
> >   descriptor. So that we require subcredential knowledge to access the
> >   encrypted part, and then client_auth_cookie knowledge to get the
> >   encryption key to decrypt the intro points etc. I feel that this
> >   double-encryption design might be too annoying to implement, but perhaps
> >   it's worth it?
> >
> 
> Seems like people preferred the double-encryption idea here, so that we
> reveal the least amount of information possible in the plaintext part of
> the desc.
> 
> I think this is a reasonable point since if we put the auth keys in the
> plaintext part of the descriptor, and we always pad (or fake clients) up
> to N authorized clients, it will be obvious to an HSDir if a hidden
> service has more than N authorized clients (since we will need to fake
> 2*N clients then).
> 
> ---
> 
> WRT protocol, I guess the idea here is that if client auth is enabled,
> then we add some client authorization fields in the top of the encrypted
> section of the descriptor, that can be used to find the client-auth
> descriptor encryption key. Then we add another client-auth-encrypted
> blob inside the encrypted part, which contains the intro points etc. and
> is encrypted using the descriptor encryption key found above.

Well, I would only encrypt the key that was used to encrypt the introduction
points, but I'm sure this is what you meant!

> 
> So the first layer is encrypted using the onion address, and the second
> layer is encrypted using the client auth descriptor key. This won't be
> too hard to implement, but it's also different from what's currently
> coded in #17238.

Indeed, we need to change stuff but I think it's fine. We can get #17238
merged and then simply apply those changes after. I'm not too concerned about
the engineering logistics personally.

> 
> Do people feel OK with this?
> 
> Also, what should happen if client auth is not used? Should we fall back
> to the current descriptor format, or should we fake authorized clients
> and add a fake client-auth-encrypted-blob for uniformity? Feedback is
> welcome here, and I think the main issue here is engineering time and
> reuse of the current code.

Re: [tor-dev] Revisiting prop224 client authorization

2016-10-17 Thread George Kadianakis
George Kadianakis  writes:

> [ text/plain ]
> Hello,
>
> we've reached the point in prop224 development where we need to pin down
> the precise cell formats, so that we can start implementing them. HS
> client authorization has been one of those areas that are not yet
> finalized and are still influencing cell format.
>
> Here are some topics based on special's old notes, plus some further
> recent discussion with David and Yawning.
>

Hello again,

I read the feedback on the thread and thought some more about this. Here
are some thoughts based on received feedback. A torspec branch coming
soon if people agree with my points below.

I'd also like to introduce a new topic of discussion here:

d) Should we introduce the concept of stealth auth again?

   IIUC the current prop224 client auth solutions are not providing all
   the security properties that stealth auth did. Specifically, if Alice
   is an ex-authorized-client of a hidden service and she got revoked,
   she can still fetch the descriptor of a hidden service and hence
   learn the uptime/presence of the HS. IIUC, with stealth auth this was
   not previously possible.

> a) I think the most important problem here is that the authorization-key logic
>in the current prop224 is very suboptimal. Specifically, prop224 uses a
>global authorization-key to ensure that descriptors are only read by
>authorized clients. However, since that key is global, if we ever want to
>revoke a single client we need to change the keys for all clients. The
>current rend-spec.txt does not suffer from this issue, hence I adapted the
>current technique to prop224.
>
>Please see my torspec branch `prop224_client_auth` for the proposed changes:
>
> https://gitweb.torproject.org/user/asn/torspec.git/log/?h=prop224_client_auth
>
>Some further questions here:
>
>i) Should we fake the client-auth-desc-key blob in case client authorization
>   is not enabled? Otherwise, we leak to the HSDir whether client auth is
>   enabled. The drawback here is the desc size increase (by about 330 bytes).
>
>   Alternatively, we can try to put it in the encrypted part of the
>   descriptor. So that we require subcredential knowledge to access the
>   encrypted part, and then client_auth_cookie knowledge to get the
>   encryption key to decrypt the intro points etc. I feel that this
>   double-encryption design might be too annoying to implement, but perhaps
>   it's worth it?
>

Seems like people preferred the double-encryption idea here, so that we
reveal the least amount of information possible in the plaintext part of
the desc.

I think this is a reasonable point since if we put the auth keys in the
plaintext part of the descriptor, and we always pad (or fake clients) up
to N authorized clients, it will be obvious to an HSDir if a hidden
service has more than N authorized clients (since we will need to fake
2*N clients then).

---

WRT protocol, I guess the idea here is that if client auth is enabled,
then we add some client authorization fields in the top of the encrypted
section of the descriptor, that can be used to find the client-auth
descriptor encryption key. Then we add another client-auth-encrypted
blob inside the encrypted part, which contains the intro points etc. and
is encrypted using the descriptor encryption key found above.

So the first layer is encrypted using the onion address, and the second
layer is encrypted using the client auth descriptor key. This won't be
too hard to implement, but it's also different from what's currently
coded in #17238.
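
To make the layering concrete, here is a rough Python sketch of the structure
being described (illustrative only: the KDF, the per-client key wrapping, and
the use of the 'cryptography' package's ChaCha20Poly1305 as a convenient AEAD
are all stand-ins, not the prop224 format or Tor's actual crypto):

    # Illustrative sketch -- NOT the prop224 wire format.
    import os, hashlib
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    def derive_key(*parts):
        # Stand-in KDF: hash the inputs down to a 32-byte key.
        h = hashlib.sha3_256()
        for p in parts:
            h.update(p)
        return h.digest()

    def seal(key, plaintext):
        # Stand-in AEAD wrapper: prepend a random nonce to the ciphertext.
        nonce = os.urandom(12)
        return nonce + ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)

    def build_descriptor(onion_address, client_auth_pubkeys, intro_points):
        # All parameters are bytes.
        # Second (inner) layer: intro points, encrypted under a fresh
        # per-descriptor "client auth descriptor key".
        desc_enc_key = os.urandom(32)
        inner = seal(desc_enc_key, intro_points)

        # Client-auth fields: one entry per authorized client, each wrapping
        # desc_enc_key.  (A hash of the client's key stands in here for the
        # real per-client key exchange.)
        auth_entries = [seal(derive_key(b"client-auth-wrap", pub), desc_enc_key)
                        for pub in client_auth_pubkeys]

        # First (outer) layer: everything above, encrypted under a key derived
        # from the onion address, i.e. readable by anyone who knows the address.
        outer_key = derive_key(b"desc-outer-layer", onion_address)
        return seal(outer_key, b"".join(auth_entries) + inner)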

Do people feel OK with this?

Also, what should happen if client auth is not used? Should we fall back
to the current descriptor format, or should we fake authorized clients
and add a fake client-auth-encrypted-blob for uniformity? Feedback is
welcome here, and I think the main issue here is engineering time and
reuse of the current code.

---

Now WRT security, even if we do the double-encryption thing, and we
consider an HSDir adversary that knows the onion address but is not an
authorized client, we still need to add fake clients; otherwise that
adversary will know the exact number of authorized clients. So fake
clients will probably need to be introduced anyhow.

As David pointed out, this all boils down to how much we pad the
encrypted part of the descriptor; otherwise we always leak info. If we
are hoping for a leakless strategy here, we should be generous with our
padding.

Let's see how much padding we need:

- Each intro point adds about 1.1k bytes to the descriptor (according to
  David).

- Each block of 16 authorized clients adds about 1k bytes to the
  descriptor (according to the format described below).

- Apart from intro points and authorized clients, the rest of the
  descriptor is not that heavy: less than 1k bytes (right?)
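
A quick back-of-the-envelope script with the numbers above (all of them rough
estimates from this thread, not measured constants), just to make the totals
easy to play with:

    import math

    INTRO_POINT_BYTES = 1100   # ~1.1k per intro point (estimate)
    AUTH_BLOCK_BYTES  = 1000   # ~1k per block of 16 authorized clients (estimate)
    BASE_BYTES        = 1000   # everything else, assumed to be < 1k

    def descriptor_size(n_intro_points, n_auth_clients):
        auth_blocks = math.ceil(n_auth_clients / 16)
        return (BASE_BYTES
                + n_intro_points * INTRO_POINT_BYTES
                + auth_blocks * AUTH_BLOCK_BYTES)

    # e.g. 5 intro points and 16 authorized clients:
    print(descriptor_size(5, 16))   # ~7500 bytes before any extra padding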

To get an average size here, let's consider a normal descriptor with 5
intro points 

Re: [tor-dev] strange ARM results

2016-10-17 Thread Damian Johnson
Hi Rob. I suppose it's possible arm is having a refresh issue, but I
can't say there's a known bug around that. To double-check, try running
tor-prompt and giving it 'GETINFO circuit-status'...

https://stem.torproject.org/tutorials/down_the_rabbit_hole.html

This is the command arm uses to get the circuit information.
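
For reference, the same query can be made from stem directly; a minimal
sketch, assuming a control port on 9051 and working authentication:

    from stem.control import Controller

    # Ask tor for its current circuits -- the same information arm shows
    # on its circuits page.
    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        print(controller.get_info('circuit-status'))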

> ARM version 1.4.5 seems to be the latest version. I checked out NYX but
> failed to get it running (Unable to load nyx's internal configurations:
> [Errno 21] Is a directory: '/home/rob/src/nyx/nyx/settings')

Interesting! How did you attempt to run it? Nyx is under active
development, so it's quite possible I buggered something up, but I can't say
I've seen this one. Please provide the exact commands you ran - for
instance...

% git clone http://dccbbv6cooddgcrq.onion/nyx.git
% cd nyx
% ./run_nyx


Re: [tor-dev] [prop269] Further changes to the hybrid handshake proposal (and NTor)

2016-10-17 Thread John M. Schanck
Hi Michael,

Michael Rogers wrote:
> If we're concerned with the server choosing its public material in such
> a way as to bias the entropy extraction, does that mean that in this
> case, the attacker is the server, and therefore the server's public
> material shouldn't be included in the salt?

In a one-way authenticated key exchange we only need to consider
adversaries that attempt to impersonate the server. So, yes, we're
considering the case where the attacker plays the server role and
we're saying that unauthenticated material from the server should
not be included in the salt.

Previous versions of prop269 included the server ephemeral shares
in the salt; we've removed those in this version.

The remaining values in the salt are:
- the server's identity digest,
- the server's onion key, and
- ephemeral shares from the client.

All of these values are authentic from the client's perspective.

Since we're not including the server shares in the salt, we also
had to switch from sending 'auth' to sending HMAC(auth, transcript)
in the server response.
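
In rough Python terms, the extract step now looks like this (a sketch of the
structure only; names and encodings are illustrative, not the prop269 wire
format):

    import hashlib, hmac

    def hkdf_extract(salt, ikm):
        # RFC 5869 extract step: PRK = HMAC-Hash(salt, IKM).
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def derive_prk(server_id_digest, server_onion_key, client_shares, secret_input):
        # The salt contains only values the client already holds authentically;
        # the server's ephemeral shares are deliberately left out.
        salt = server_id_digest + server_onion_key + b"".join(client_shares)
        return hkdf_extract(salt, secret_input)

    def server_auth_field(auth, transcript):
        # With the server shares out of the salt, the server sends
        # HMAC(auth, transcript) rather than the bare 'auth' value.
        return hmac.new(auth, transcript, hashlib.sha256).digest()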

Cheers,
John



Re: [tor-dev] strange ARM results

2016-10-17 Thread Rob van der Hoeven
On Mon, 2016-10-17 at 22:30 +1100, teor wrote:
> > On 17 Oct 2016, at 22:04, Rob van der Hoeven wrote:
> > 
> > Hi folks,
> > 
> > I'm on a quest to find the average circuit-creation rate of clients. I
> > looked in path-spec.txt to find an answer, but it wasn't there. So I
> > thought: lets take some measurements using ARM. This got me some strange
> > results. I start ARM, do some browsing, and close my browser. During the
> > browsing ARM reports 3 circuits with ID's: 398, 399 and 400. These
> > circuits are still there 45 minutes after the browser was closed. If I
> > then restart ARM it reports that there are only two circuits, with
> > circuit ID's 405 and 406. It looks to me that ARM does not update the
> > circuits page when old circuits are closed and replaced by new circuits.
> > It's also possible that ARM keeps old circuits alive after they are not
> > being used anymore by my Tor proxy.
> 
> It seems more likely that this is a refresh issue, either in arm or in
> the tor event code.
> 
> Can you replicate it with the latest stable versions of tor and arm?
> 
> > Note: I use ARM version: 1.4.5.0, Tor version: 0.2.5.12
> 
> That's a very old version of tor. I wouldn't use it to measure anything, we've
> made significant improvements in the last few years.

ARM version 1.4.5 seems to be the latest version. I checked out NYX but
failed to get it running (Unable to load nyx's internal configurations:
[Errno 21] Is a directory: '/home/rob/src/nyx/nyx/settings')

Tested with Tor version 0.2.9.3-alpha-dev; same problem.

Regards,
Rob.
https://hoevenstein.nl




Re: [tor-dev] strange ARM results

2016-10-17 Thread teor

> On 17 Oct 2016, at 22:04, Rob van der Hoeven  wrote:
> 
> Hi folks,
> 
> I'm on a quest to find the average circuit-creation rate of clients. I
> looked in path-spec.txt to find an answer, but it wasn't there. So I
> thought: lets take some measurements using ARM. This got me some strange
> results. I start ARM, do some browsing, and close my browser. During the
> browsing ARM reports 3 circuits with ID's: 398, 399 and 400. These
> circuits are still there 45 minutes after the browser was closed. If I
> then restart ARM it reports that there are only two circuits, with
> circuit ID's 405 and 406. It looks to me that ARM does not update the
> circuits page when old circuits are closed and replaced by new circuits.
> It's also possible that ARM keeps old circuits alive after they are not
> being used anymore by my Tor proxy.

It seems more likely that this is a refresh issue, either in arm or in
the tor event code.

Can you replicate it with the latest stable versions of tor and arm?

> Note: I use ARM version: 1.4.5.0, Tor version: 0.2.5.12

That's a very old version of tor. I wouldn't use it to measure anything; we've
made significant improvements in the last few years.

T

> 
> Regards,
> Rob.
> https://hoevenstein.nl
> 
> 

T

--
Tim Wilson-Brown (teor)

teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org
--


Re: [tor-dev] Tor Relays on Whonix Gateway

2016-10-17 Thread teor

> On 17 Oct 2016, at 19:48, juanjo  wrote:
> 
> Interesting... I thought that a Tor client running a relay would actually 
> help its privacy because you can't tell if its a client connection or relay 
> connection…

It depends what sort of privacy you're after.
It provides a certain level of traffic hiding, but it makes the IP address and
uptime/downtime/latency/weird pauses public. We don't recommend it.

T

> 
> On 17/10/2016 at 3:04, teor wrote:
>>> On 7 Oct 2016, at 08:11, ban...@openmailbox.org
>>>  wrote:
>>> 
>>> Should Whonix document/encourage end users to turn clients into relays on 
>>> their machines?
>>> 
>> Probably not:
>> * it increases the attack surface,
>> * it makes their IP address public,
>> * the relays would be of variable quality.
>> 
>> Why not encourage them to run bridge relays instead, if their connection is
>> fast enough?
>> 
>> T
>> 
>> --
>> Tim Wilson-Brown (teor)
>> 
>> teor2345 at gmail dot com
>> PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
>> ricochet:ekmygaiu4rzgsk6n
>> xmpp: teor at torproject dot org
>> --

T

--
Tim Wilson-Brown (teor)

teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org
--


[tor-dev] strange ARM results

2016-10-17 Thread Rob van der Hoeven
Hi folks,

I'm on a quest to find the average circuit-creation rate of clients. I
looked in path-spec.txt to find an answer, but it wasn't there. So I
thought: let's take some measurements using ARM. This got me some strange
results. I start ARM, do some browsing, and close my browser. During the
browsing ARM reports 3 circuits with IDs 398, 399 and 400. These
circuits are still there 45 minutes after the browser was closed. If I
then restart ARM, it reports that there are only two circuits, with
circuit IDs 405 and 406. It looks to me that ARM does not update the
circuits page when old circuits are closed and replaced by new circuits.
It's also possible that ARM keeps old circuits alive after they are no
longer being used by my Tor proxy.

Note: I use ARM version: 1.4.5.0, Tor version: 0.2.5.12 

Regards,
Rob.
https://hoevenstein.nl




Re: [tor-dev] [prop269] Further changes to the hybrid handshake proposal (and NTor)

2016-10-17 Thread Michael Rogers
On 14/10/16 22:45, isis agora lovecruft wrote:
>  1. [NTOR] Inputs to HKDF-extract(SALT, SECRET) which are not secret
> (e.g. server identity ID, and public keys A, X, Y) are now removed from
> SECRET and instead placed in the SALT.
> 
> Reasoning: *Only* secret data should be placed into the HKDF extractor,
> and public data should not be mixed into whatever entropic material is
> used for key generation.  This eliminates a theoretical attack in which
> the server chooses its public material in such a way as to bias the
> entropy extraction.  This isn't reasonably assumed to be possible in a
> "hash functions aren't probabilistically pineapple slicers" world.
> 
> Previously, and also in NTor, we were adding the transcript of the
> handshake(s) and other public material (e.g. ID, A, X, Y, PROTOID)
> directly into the secret portion of an HMAC call, the output of which is
> eventually used to derive the key material.  The SALT for HKDF (as
> specified in RFC5869) can be anything, even a static string, but if we're
> going to be adding transcript material into the handshake, it shouldn't be
> in the entropy extraction phase.

Hi Isis,

Sorry if this is a really stupid question, but there's something I've
never fully understood about how RFC 5869 describes the requirements for
the HKDF salt. Section 3.4 says:

   While there is no need to keep the salt secret, and the
   same salt value can be used with multiple IKM values, it is assumed
   that salt values are independent of the input keying material.  In
   particular, an application needs to make sure that salt values are
   not chosen or manipulated by an attacker.  As an example, consider
   the case (as in IKE) where the salt is derived from nonces supplied
   by the parties in a key exchange protocol.  Before the protocol can
   use such salt to derive keys, it needs to make sure that these nonces
   are authenticated as coming from the legitimate parties rather than
   selected by the attacker (in IKE, for example this authentication is
   an integral part of the authenticated Diffie-Hellman exchange).

As far as I can tell, the assumption in this example is that the
attacker is not the other party in the key exchange - otherwise
authenticating the nonces wouldn't tell us whether they were safe to
include in the salt.

If we're concerned with the server choosing its public material in such
a way as to bias the entropy extraction, does that mean that in this
case, the attacker is the server, and therefore the server's public
material shouldn't be included in the salt?

Again, probably just a failure on my part to understand the context, but
I thought I should ask just in case.

Cheers,
Michael




Re: [tor-dev] Tor Relays on Whonix Gateway

2016-10-17 Thread juanjo
Interesting... I thought that a Tor client running a relay would
actually help its privacy because you can't tell if it's a client
connection or a relay connection...



On 17/10/2016 at 3:04, teor wrote:

On 7 Oct 2016, at 08:11, ban...@openmailbox.org wrote:

Should Whonix document/encourage end users to turn clients into relays on their 
machines?

Probably not:
* it increases the attack surface,
* it makes their IP address public,
* the relays would be of variable quality.

Why not encourage them to run bridge relays instead, if their connection is
fast enough?

T

--
Tim Wilson-Brown (teor)

teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org
--


Re: [tor-dev] Tor Relays on Whonix Gateway

2016-10-17 Thread isis agora lovecruft
ban...@openmailbox.org transcribed 1.7K bytes:
> On 2016-10-17 03:04, teor wrote:
> >>On 7 Oct 2016, at 08:11, ban...@openmailbox.org wrote:
> >>
> >>Should Whonix document/encourage end users to turn clients into relays
> >>on their machines?
> >
> >Probably not:
> >* it increases the attack surface,
> >* it makes their IP address public,
> >* the relays would be of variable quality.
> >
> >Why not encourage them to run bridge relays instead, if their connection
> >is
> >fast enough?
> 
> Good idea. We are waiting for snowflake bridge transport to be ready and we
> plan to enable it by default on Whonix Gateway. It's optimal because no port
> forwarding or firewall changes are needed (because the VMs connect from
> behind virtual NATs).

You're planning to enable "ServerTransportPlugin snowflake" on Whonix Gateways
by default?  And then "ClientTransportPlugin snowflake" on workstations
behind the gateway?

-- 
 ♥Ⓐ isis agora lovecruft
_
OpenPGP: 4096R/0A6A58A14B5946ABDE18E207A3ADB67A2CDB8B35
Current Keys: https://fyb.patternsinthevoid.net/isis.txt




Re: [tor-dev] [prop269] Further changes to the hybrid handshake proposal (and NTor)

2016-10-17 Thread Trevor Perrin
On Fri, Oct 14, 2016 at 2:45 PM, isis agora lovecruft
 wrote:
>
> After discussion with John Schanck and Trevor Perrin over the last month,
> we've decided to make some alterations to the specification for hybrid
> handshakes in Tor proposal #269.
>
> It seems that John, Trevor, and I are mostly in agreement about most
> of the construction.

Hi Isis, all,

My main suggestion was to take a look at Noise:

https://noiseprotocol.org

Noise is a framework for DH-based (Ntor-like) key exchange protocols.
You choose a "handshake pattern" plus your favorite crypto and it
fills in the details.  So this would save you from hand-crafting your
key derivation and transcript hashing, as Noise specifies this (e.g.
it uses a chain of HKDF for key derivation, similar to Signal, IPsec,
or TLS 1.3).

For Ntor + hybrid forward secrecy, you could choose something like:

  Noise_NKhfs_25519+NewHope_ChaChaPoly_SHA256
or:
  Noise_NKhfs_25519+NTRU_AESGCM_BLAKE2b
  etc.

The names are a mouthful, but specify the whole protocol:

  NKhfs is a handshake pattern
NK = (N)o client long-term key, (K)nown server long-term key
hfs = hybrid forward secrecy
  25519+NewHope = public-key algorithms
  ChaChaPoly = ChaCha20/Poly1305 for AEAD
  SHA256 = hash for transcript hashing and HKDF
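
For what it's worth, the name decomposes mechanically; a trivial Python
sketch mirroring the breakdown above (string parsing only, purely
illustrative):

    def parse_noise_name(name):
        # e.g. "Noise_NKhfs_25519+NewHope_ChaChaPoly_SHA256"
        prefix, pattern, dh, cipher, hash_ = name.split("_")
        assert prefix == "Noise"
        return {
            "pattern": pattern,     # NKhfs: NK pattern + hybrid forward secrecy
            "dh": dh.split("+"),    # ["25519", "NewHope"]
            "cipher": cipher,       # ChaCha20/Poly1305 AEAD
            "hash": hash_,          # SHA256 for transcript hashing and HKDF
        }

    print(parse_noise_name("Noise_NKhfs_25519+NewHope_ChaChaPoly_SHA256"))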


Some other benefits:

 * There are C and Java libraries by Rhys Weatherley that implement this
(with NewHope), and hopefully more will pop up.

 * Saves design effort, because it's easy to change patterns to add
client auth, pre-shared keys, or certificates, or to swap out crypto.

 * Also used by WhatsApp and WireGuard, so hopefully the libraries,
tools, and design will continue to improve, benefiting other users.

Noise can be hard to figure out because it's a toolkit, not a single
protocol, but I'd be happy to answer questions about particular use
cases.

Of course, I also think Tor's existing Ntor, the current Tor
proposals, and the changes Isis is mentioning, all seem fine.

Trevor