[tor-dev] Privacy Pass

2017-11-23 Thread bancfc
Hi. Are there any plans to include the Privacy Pass addon in Tor Browser by 
default? Privacy Pass is the result of some great work by Ian and his team at 
the University of Waterloo to spare Tor users the torture of solving endless 
CAPTCHAs from Cloudflare.[0][1]

[0] https://privacypass.github.io/team/
[1] https://blog.cloudflare.com/privacy-pass-the-math/
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] PQ crypto updates

2017-08-18 Thread bancfc
If I understand correctly, DJB describes how NTRU Prime is more robust against 
certain attack classes that Ring-LWE is more susceptible to:

https://twitter.com/hashbreaker/status/880086983057526784

***

About two months later, DJB released a streamlined version of NTRU Prime that is 
faster, safer, and uses fewer resources than the latest version of New Hope 
while (wait for it...) completely eliminating decryption failures!

https://twitter.com/hashbreaker/status/898048057849380864
https://twitter.com/hashbreaker/status/898048506681860096
https://twitter.com/hashbreaker/status/898048760009420801
https://twitter.com/hashbreaker/status/898391210456489984


***

Boom, headshot! AEZ is dead in the water post-quantum:

Paper name: Quantum Key-Recovery on full AEZ

https://eprint.iacr.org/2017/767.pdf


[tor-dev] Extending Tor stats to cover anon OSs?

2017-06-18 Thread bancfc

@TPO devs

Since you do a great job safely collecting useful stats on the network, 
would you be open to adding a self-identifying anon-OS distro option to 
the protocol? Would this be OK, or is it mission creep? On the flip side, 
it would be much more accurate than anything we can do to estimate the 
active user base.


Some ideas:

* The distro name options could either be hard-coded values (for example 
TAILS, Whonix...) or a custom one chosen by downstream maintainers.

* Distro maintainers would enable it for Tor clients via torrc.d


[tor-dev] Further New Hope Improvements

2017-05-23 Thread bancfc
New paper released a week ago makes further improvements on New Hope, 
reducing decryption failure rates, ciphertext size and amount of entropy 
needed. This new version will be submitted as a NIST PQ competition 
candidate.


https://eprint.iacr.org/2017/424


[tor-dev] GNU Guix and Tor Browser Packaging

2017-03-13 Thread bancfc
There is a serious Tor Browser packaging effort [3][4] being done by ng0 
(GNUnet dev) for the GNU Guix [0] package manager. GNU Guix supports 
transactional upgrades and roll-backs, unprivileged package management, 
per-user profiles and, most importantly, reproducible builds. I have 
checked with Guix's upstream, and they are working on making a binary 
mirror available over a Tor Hidden Service. [2] Also planned is 
resilience [2] to the attacks outlined in the TUF threat model. [1]


Back to the topic of Tor Browser packaging. While there are good reasons 
for Debian's packaging policies, they make packaging of fast-evolving 
software (especially with TBB's reliance on an opaque binary VM for 
builds) impractical. Both we and Micah have put good effort into 
automating the download and validation of TBB, but I still believe it's 
a maintenance burden, and Guix may be a way out of that for Linux 
distros in general.


What are your thoughts on this?





***

[0] https://www.gnu.org/software/guix/
[1] https://github.com/theupdateframework/tuf/blob/develop/SECURITY.md
[2] https://lists.gnu.org/archive/html/guix-devel/2017-03/msg00192.html
[3] https://lists.gnu.org/archive/html/guix-devel/2017-03/msg00189.html
[4] https://lists.gnu.org/archive/html/guix-devel/2017-03/msg00149.html


Re: [tor-dev] Tor Browser and Mozilla addon verification

2017-02-17 Thread bancfc

On 2017-02-18 01:29, teor wrote:

Future questions about Tor Browser would best be directed to:
tbb-...@torproject.org

If you post this question to tbb-dev, please let this list know to
direct responses there.

T



My bad. I reposted my question there at: 
https://lists.torproject.org/pipermail/tbb-dev/2017-February/000464.html


Please direct any answers there.


[tor-dev] Tor Browser and Mozilla addon verification

2017-02-17 Thread bancfc
Hi, does Tor Browser verify that addons fetched from the Mozilla server 
have not been tampered with?




[tor-dev] SipHash Impact on TCP ISN skew fingerprinting

2017-01-11 Thread bancfc
SipHash, a fast PRF by DJB, has been adopted upstream across the Linux 
networking stack, landing in 4.11. It deprecates a lot of ancient and 
broken crypto, like MD5, for initial sequence number hashes.


It's my guess that the timer values added to ISNs should now be 
indistinguishable from the rest of the hashed secret outlined in 
RFC 6528.[1] Can anyone knowledgeable in reading kernel code [2] please 
confirm that this kills the clock skew extraction [3] and fingerprinting [4] 
described in Steven Murdoch's papers?


It's one of the advanced attacks we've been following for some time now, 
and it would be good to write it off.
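As a rough illustration of what RFC 6528 specifies (a sketch only: BLAKE2s 
stands in for the kernel's SipHash PRF, and the secret and timer constants 
are assumptions):

```python
import hashlib
import time

# Per-boot secret, never disclosed (assumption: 32 bytes of entropy).
SECRET_KEY = b"local-secret-32-bytes-of-entropy"

def isn(local_ip: str, local_port: int, remote_ip: str, remote_port: int) -> int:
    """RFC 6528-style ISN: ISN = M + F(localip, localport, remoteip,
    remoteport, secretkey), where M is a ~4-microsecond timer."""
    conn = f"{local_ip}:{local_port}|{remote_ip}:{remote_port}".encode()
    # F(): a keyed PRF over the connection 4-tuple (BLAKE2s as stand-in).
    f = int.from_bytes(hashlib.blake2s(conn, key=SECRET_KEY).digest()[:4], "big")
    m = int(time.monotonic() * 250_000) & 0xFFFFFFFF  # 4 us granularity timer M
    return (m + f) & 0xFFFFFFFF
```

Whether the timer component M remains extractable by an observer who collects 
many ISNs is exactly the question posed above; the F() term is constant per 
4-tuple, so the analysis hinges on how ISNs from the same 4-tuple evolve.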


***

[1] https://tools.ietf.org/html/rfc6528

[2] http://lkml.iu.edu/hypermail/linux/kernel/1701.1/00076.html

[3] http://sec.cs.ucl.ac.uk/users/smurdoch/papers/ih05coverttcp.pdf 
(pages 7-8)


[4] http://sec.cs.ucl.ac.uk/users/smurdoch/papers/ccs06hotornot.pdf


[tor-dev] TBB Isolation Impact on Alternative Anon Nets

2016-12-05 Thread bancfc
TBB sandboxing is a great hardening measure. I was wondering if there 
are side effects, such as breaking setups that involve using anonymous 
networks other than Tor, for example: 
https://thetinhat.com/tutorials/darknets/i2p-browser-setup-guide.html


As a workaround, we can document how to toggle the TBB variable to 
disable this. Of course, the best solution is making the isolation 
compatible with alternative setups, if you consider this (minority) 
use case worthy of your effort.



[tor-dev] Hidden Services and identity-based encryption (IBE)

2016-12-03 Thread bancfc
Read the Alpenhorn paper. Really neat stuff. It is able to guarantee 
forward secrecy for identities and metadata and doesn't need out-of-band 
identity sharing. Can any of this be borrowed for HSs?


https://vuvuzela.io/alpenhorn-extended.pdf



[tor-dev] Shor's Algorithm meets Lattices

2016-11-26 Thread bancfc
In a new paper, Peter Shor extends his quantum algorithm to solving a 
variant of the closest-vector problem on lattices in polynomial time. With 
some future tweaking, it could be used against the entire family of 
lattice-based crypto.


While an error in the calculations has been pointed out and the paper 
will be withdrawn, this isn't reassuring, since a revised version where 
the result still holds is probable.


It's available on arXiv until Monday, so grab a copy before then:

https://arxiv.org/pdf/1611.06999.pdf


Without lattice crypto we're stuck with some very ugly choices, as Isis 
pointed out. McEliece is huge. SIDH is slow and brittle. The PQ future 
looks grim, fam :(



Re: [tor-dev] Browsers, VMs and Targeted Hardware Bit-Flips

2016-11-18 Thread bancfc

On 2016-11-18 00:03, teor wrote:

Hi all,

There have been a series of recent attacks that take advantage of
"rowhammer" (a RAM hardware bit-flipping vulnerability) to flip bits in
security-critical data structures.

VMs sharing the same physical RAM are vulnerable, and browsers and
mobile apps are remote vectors with proof-of-concept implementations.

Rowhammer summary:
https://en.wikipedia.org/wiki/Row_hammer

An attack that flips targeted bits in another virtual machine on the
same physical RAM, targeting OpenSSH public keys, GPG public keys, and
Debian package sources:
https://www.vusec.net/projects/flip-feng-shui/

A similar proof-of-concept Android app:
https://www.vusec.net/projects/drammer/

A JavaScript-based in-browser remote proof-of-concept:
https://arxiv.org/pdf/1507.06955v1.pdf

It seems like a short step from these existing attacks to targeting
Tor Browser users remotely. I wonder whether it might be possible to
target relays (or clients) using OR cells or directory documents with
specific content, but this seems much less likely.

I have been thinking about how we could make Tor (and browsers, and
other processes, and OSs) less vulnerable to these kinds of attacks.

In general, some of the process-level defences against one or more of
the above attacks are:
* sign or checksum all security-critical data structures,
* implement and check cross-certification,
* don't rely on cached checks (or checks performed at load time)
  continuing to be accurate,
* minimise time between checking validity and using the data,
  (this includes signatures, checksums, data structure consistency)
* make the content of memory pages (including loaded files) less
  predictable,
* make sure the hamming distance between trusted, valid inputs and
  untrusted, valid inputs is large, in particular:
  * register domains that are one bit-flip away from trusted domains,
   (or, alternately, mandate SSL, and pin certificates, and fix broken
CA roots)

Some of the OS-level defences are:
* turn off memory deduplication,
* write and verify checksums on each page,

Some of the firmware/hardware defences are:
* increase the RAM refresh rate,
* improve RAM design,
* use ECC RAM.

T




We've been keeping track of these attack classes on hypervisors:

https://www.whonix.org/wiki/Advanced_Attacks
https://www.whonix.org/wiki/KVM#Unsafe_Features

It's great to design software that's resistant to adversarial conditions 
anyway.
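The "minimise time between checking validity and using the data" defence 
from teor's list can be sketched as follows (illustrative only, not from 
any Tor codebase; the class name is made up):

```python
import hashlib

class CheckedBytes:
    """Security-critical bytes paired with a digest that is re-verified
    on every access, shrinking the window between a hardware bit-flip
    and its detection."""

    def __init__(self, data: bytes):
        self._data = data
        self._digest = hashlib.sha256(data).digest()

    def get(self) -> bytes:
        # Verify immediately before each use, never relying on a check
        # cached from load time.
        if hashlib.sha256(self._data).digest() != self._digest:
            raise RuntimeError("checksum mismatch: possible memory corruption")
        return self._data
```

A real defence would also keep the digest far from the data in memory, so a 
single flipped row is unlikely to corrupt both consistently.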




[tor-dev] Distributed RNG Research

2016-11-18 Thread bancfc
New research on distributed RNGs has been published: "Scalable Bias-Resistant 
Distributed Randomness"


https://eprint.iacr.org/2016/1067


[tor-dev] Different trust levels using single client instance

2016-10-21 Thread bancfc

Summarized question:

Do you recommend allowing Workstation VMs of different security levels 
to communicate with the same Tor instance? Note that they connect to 
the Gateway via separate internal networks and have different interfaces 
& control ports, so inter-workstation communication should not be 
possible.
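For reference, a single-gateway setup of this kind might look like the 
following torrc sketch (the addresses and ports are assumptions for 
illustration, not Whonix's actual defaults):

```
# One SocksPort/ControlPort pair per internal network, so workstations of
# different trust levels never share a listener.
SocksPort   10.152.152.10:9100
ControlPort 10.152.152.10:9051
SocksPort   10.153.153.10:9100
ControlPort 10.153.153.10:9051
```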



Single Tor Gateway, Multiple Workstations

Pros:
* Same guard node means less chance of picking a malicious one
* A single Gateway VM uses fewer resources

Cons:
* Some unforeseen way a malicious VM "X" can link activities of, or 
influence traffic of, VM "Y"
** Maybe sending NEWNYM requests in a timed pattern that changes the exit 
IPs of VM Y's traffic, revealing they are behind the same client?
** Maybe eavesdropping on HSes running on VM Y's behalf?
** Something else we are not aware of?


Multi-Tor Gateways mapped 1:1 to Workstation VMs

Pros:
* Conceptually simple. Each workstation uses a different Tor instance, so 
there is no need to worry about any of these questions.


Cons:
* Each uses a different entry guard, which increases the chance of running 
into a malicious relay that can deanonymize some of the traffic.
* Uses extra resources (though not much, as a Tor Gateway can run with as 
little as 192MB RAM)
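The guard trade-off in the cons above can be made concrete with a toy model 
(a sketch; it assumes each gateway picks one guard independently and that a 
fraction f of guard capacity is malicious):

```python
def p_malicious_guard(f: float, n_gateways: int) -> float:
    """Probability that at least one of n independently chosen guards
    is malicious, given malicious guard fraction f."""
    return 1.0 - (1.0 - f) ** n_gateways

# One shared gateway vs. five 1:1 gateways at f = 1%:
single = p_malicious_guard(0.01, 1)   # 0.01
per_vm = p_malicious_guard(0.01, 5)   # about 0.049
```

The shared-gateway exposure grows with the number of guards chosen, which is 
the core of the argument for reusing one guard across workstations.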



Re: [tor-dev] Tor Relays on Whonix Gateway

2016-10-19 Thread bancfc

On 2016-10-17 10:24, isis agora lovecruft wrote:

ban...@openmailbox.org transcribed 1.7K bytes:

On 2016-10-17 03:04, teor wrote:
>>On 7 Oct 2016, at 08:11, ban...@openmailbox.org wrote:
>>
>>Should Whonix document/encourage end users to turn clients into relays
>>on their machines?
>
>Probably not:
>* it increases the attack surface,
>* it makes their IP address public,
>* the relays would be of variable quality.
>
>Why not encourage them to run bridge relays instead, if their connection
>is
>fast enough?

Good idea. We are waiting for snowflake bridge transport to be ready and we
plan to enable it by default on Whonix Gateway. Its optimal because no port
forwarding is needed or changes to firewall settings (because VMs connect
from behind virtual NATs).


You're planning to enable "ServerTransportPlugin snowflake" on Whonix 
Gateways by default?  And then "ClientTransportPlugin snowflake" on 
workstations behind the gateway?




I was planning to enable the server by default (I thought WebRTC was P2P, 
though), but after looking at it some more I don't think it's a good 
idea.


Not everyone is in a position to run a bridge, because they may be living 
in a censored area themselves. It might also make Whonix users stand out 
if it were a default. Also, Snowflake servers may actually be exposing 
themselves to privacy risks, which is not something we are prepared to 
do:


"A popular privacy measure advocated to certain classes of users (eg: 
those that use VPN systems) has been to disable WebRTC due to the 
potential privacy impact. While this is not a concern for Tor Browser 
users using snowflake as a transport, there is a segment of people that 
view WebRTC as harmful to anonymity, and the volunteers that are 
contributing bandwidth are exposed to such risks. "


https://trac.torproject.org/projects/tor/wiki/doc/PluggableTransports/SnowFlakeEvaluation

***

Off-topic: I think a pluggable transport that's implemented with 
BitTorrent would be awesome because of how widespread the protocol is, 
and because of the existing infrastructure out there that users could 
potentially bootstrap off of if seed servers volunteer to run a bridge 
server/facilitator.



Re: [tor-dev] Tor Relays on Whonix Gateway

2016-10-16 Thread bancfc

On 2016-10-17 03:04, teor wrote:

On 7 Oct 2016, at 08:11, ban...@openmailbox.org wrote:

Should Whonix document/encourage end users to turn clients into relays 
on their machines?


Probably not:
* it increases the attack surface,
* it makes their IP address public,
* the relays would be of variable quality.

Why not encourage them to run bridge relays instead, if their 
connection is

fast enough?


Good idea. We are waiting for the snowflake bridge transport to be ready, 
and we plan to enable it by default on Whonix Gateway. It's optimal because 
no port forwarding or firewall changes are needed (VMs connect from behind 
virtual NATs).




T

--
Tim Wilson-Brown (teor)

teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org
--










[tor-dev] Tor Relays on Whonix Gateway

2016-10-06 Thread bancfc
Should Whonix document/encourage end users to turn clients into relays 
on their machines?




[tor-dev] archive.is alternative for CFC addon

2016-10-01 Thread bancfc
Since there were plans to use this service to circumvent Cloudflare 
CAPTCHAs, and now it's behind Cloudflare itself (it requires users to 
execute JS to access content), what alternative is planned for the 
upcoming CFC addon?



***

PS. My username predates this addon and is not related to it in any way.



Re: [tor-dev] Constraining Ephemeral Service Creation in Tor

2016-09-29 Thread bancfc

On 2016-09-29 08:38, teor wrote:

On 28 Sep 2016, at 07:59, ban...@openmailbox.org wrote:

Hello, We are working on supporting ephemeral onion services in Whonix 
and one of the concerns brought up is how an attacker can potentially 
exhaust resources like RAM, CPU, entropy... on the Gateway (or system 
in the case of TAILS) by requesting an arbitrary number of services 
and ports to be created.


In our opinion, options in core Tor for setting a maximum number of 
services and ports per service seem the right way to go about it. 
Also, rate limiting the requests (like you do with NEWNYM) would be a 
sensible thing to do.


What are your opinions about this?


I think this would be much better implemented in a control port filter.
There are several existing control port filters.
Do they have this feature?


None of them do.



Alternately, you should limit resources to the tor process using OS
facilities. If you set an open file limit, this will constrain the
number of hidden services.
If it doesn't, or tor behaves badly when adding a hidden service with
few file descriptors, file a bug against tor.


Thanks for the tip.
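teor's OS-facility suggestion can be sketched like this (a Unix-only 
illustration; in practice the limit would be set in the init script or 
service unit that launches tor, and 256 is an arbitrary example value):

```python
import resource

# Read the current file-descriptor limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Lower the soft limit; each onion service / connection costs descriptors,
# so this indirectly bounds how many services can be created.
cap = 256 if hard == resource.RLIM_INFINITY else min(256, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (cap, hard))
```

The limit is inherited across fork/exec, so a wrapper that sets it before 
launching tor constrains the tor process itself.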



T

--
Tim Wilson-Brown (teor)

teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org










Re: [tor-dev] prop224: Ditching key blinding for shorter onion addresses

2016-07-29 Thread bancfc

On 2016-07-29 17:26, George Kadianakis wrote:

Hello people,

this is an experimental mail meant to address legitimate usability 
concerns
with the size of onion addresses after proposal 224 gets implemented. 
It's

meant for discussion and it's far from a full blown proposal.

Anyway, after prop224 gets implemented, we will go from 16-character 
onion

addresses to 52-character onion addresses. See here for more details:

https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.txt#n395

This happens because we want the onion address to be a real public key, 
and not
the truncated hash of a public key as it is now. We want that so that 
we can do
fun cryptography with that public key. Specifically, we want to do key 
blinding

as specified here:

https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.txt#n1692

As I understand it the key blinding scheme is trying to achieve the
following properties:
a) Every HS has a permanent identity onion address
b) Clients use an ephemeral address to fetch descriptors from HSDir
c) Knowing the ephemeral address never reveals the permanent onion 
address

c) Descriptors are encrypted and can only be read by clients that know
the identity onion key
d) Descriptors are signed and verifiable by clients who know the
identity onion key
e) Descriptors are also verifiable in a weaker manner by HSDirs who
know the ephemeral address

In this email I'm going to sketch a scheme that has all above
properties except from (e).

The suggested scheme is basically the current HSDir protocol, but with 
clients
using ephemeral addresses for fetching HS descriptors. Also, we 
truncate onion

address hashes to something larger than 80bits.

Here is a sketch of the scheme:

--

Hidden service Alice has a long-term public identity key: A
Hidden service Alice has a long-term private identity key: a

The onion address of Alice, as in the current scheme, is a truncated 
H(A).

So let's say: onion_address = H(A) truncated to 128 bits.

The full public key A is contained in Alice's descriptor as it's
currently the case.

When Alice wants to publish a descriptor she computes an ephemeral 
address
based on the current time period 't': ephemeral_address = H(t || 
onion_address)


Legitimate clients who want to fetch the descriptor also do the same, 
since

they know both 't' and 'onion_address'.

Descriptors are encrypted using a key derived from the onion_address. 
Hence,

only clients that know the onion_address can decrypt it.

Descriptors are signed using the long-term private key of the hidden 
service,

and can be verified by clients who manage to decrypt the descriptor.

---
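Written out as a minimal sketch (SHA-256 and the 128-bit truncation here are 
stand-ins; the real design would fix its own hash and encodings):

```python
import hashlib

def onion_address(identity_pubkey: bytes) -> bytes:
    # onion_address = H(A) truncated to 128 bits.
    return hashlib.sha256(identity_pubkey).digest()[:16]

def ephemeral_address(t: int, onion: bytes) -> bytes:
    # ephemeral_address = H(t || onion_address); this is all HSDirs see,
    # and it never reveals the permanent onion address.
    return hashlib.sha256(t.to_bytes(8, "big") + onion).digest()[:16]

def descriptor_key(onion: bytes) -> bytes:
    # Descriptors are encrypted under a key derived from the onion
    # address, so only clients who already know the address can decrypt.
    return hashlib.sha256(b"desc-enc" + onion).digest()
```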

Assuming the above is correct and makes sense (need more brain), it 
should

maintain all the security properties above except from (e).

So basically in this scheme, HSDirs won't be able to verify the 
signatures of

received descriptors.

The obvious question here is, is this a problem?

IIUC, having the HSDirs verify those signatures does not offer any 
additional
security, except from making sure that the descriptor signature was 
actually

created using a legitimate ed25519 key. Other than that, I don't see it
offering much.

So, what does this additional HSDir verification offer? It seems like a 
weak
way to ensure that no garbage is uploaded on the HSDir hash ring. 
However, any
reasonable attacker will put their garbage in a descriptor and sign it 
with a

random ed25519 key, and it will trivially pass the HSDir validation.

So do we actually care about this property enough to introduce huge 
onion

addresses to the system?

Please discuss and poke holes at the above system.

Cheers!



Speaking out of turn here:

Why not integrate kernelcorn's OnioNS project and keep all the current 
security properties?


OnioNS addresses are much more user-friendly than even the shorter 
.onion addresses.



Re: [tor-dev] Tor with collective signatures

2016-07-22 Thread bancfc

On 2016-07-21 17:05, isis agora lovecruft wrote:

Nicolas Gailly transcribed 59K bytes:

Hi,

Here's a new version of the proposal with some minor fixes discussed
with teor last time.

0.4:
- changed *included* to *appended*
- 3.2: end of paragraph, a valid consensus document contains a 
majority

   of CoSi signatures.
- Acknowledgments include teor and Tom Ritter.

As always, critics / feedbacks / thoughts are more than welcome :)

Thanks !

Nicolas

Ps: Our team and I are going to be at PETS this year, so if you don't
have time now to

read the whole thing, but you are still willing to know about CoSi and
how it could improve

Tor security, I/we will be happy to talk with some of you there also.


Hello all,

At PETS this afternoon, Nicolas Gailly, Philipp Jovanovic, Ismail 
Khoffi,
Georg Koppen, Nick Mathewson, and I met to discuss the collective 
signing

proposal.  I'm just going to briefly summarise the discussion here.

One of the major concerns voiced was that, if we made it mandatory that 
a

collective signature on a consensus be verifiable (for some N number of
signers, where N might be all of them but it's not important) for a 
client to
accept and use a consensus, then attacks upon the witnesses (or any 
disruption
to the witness signing system) will cause clients to no longer be able 
to
bootstrap.  Conversely, if we made it so that it only emitted some 
warning
when the collective signature could not be verified, then (likely) no 
users
would see this warning (or even if they did, they'd treat it in the 
same

manner as a TLS certificate warning and simply click through it).

There is also concern that, with enforcing collective signatures, that 
the Tor
network has a larger attack surface w.r.t. (D)DoSing: an adversary 
could DoS 5
of the 9 DirAuths *or* they could DoS whatever necessary percentage of 
the
witness servers.  Additionally, an adversary who controls some portion 
of the
witness servers may DoS other witnesses in order to amplify the 
relative

proportion of the collective signature which they control.

There was some discussion over whether to integrate this into core tor, 
or
rather to just use Nicolas' CoSi Golang tool in a separate process.  
Everyone

agreed that rewriting something from Go to C is suboptimal.

One idea was if we used CoSi, but rather than "don't trust/use a 
consensus if
it doesn't have a good CoSi" we could use it as a mechanism for clients 
to
report (to some system somewhere? perhaps part of the prop#267 
consensus

transparency logs?) when CoSis don't verify successfully.


+1 to this write-up. Besides sending stats, I still think it's useful for 
the CoSi Go client to locally log its results, so advanced users (who 
don't treat warnings like browser SSL errors) can take the necessary 
steps if they see they are being attacked.




Another idea was to use CoSi to sign the metadata file which Firefox's 
updater
uses to learn where to fetch updates so that a client would know that 
the same
Tor Browser updates were being served to other different vantage 
points.


Todo list:

 1. It's not super necessary, but more analysis of the bandwidth 
overhead for
running this protocol would be nice, i.e. network-wide overhead, 
not just

the overhead for a single witness.

 2. It would be nice to have some RFC-like description so that 
alternate
implementations could be created, e.g. include encodings, state 
machines,
message formats.  (We strive to maintain our specifications with 
the
delusion that there are secretly hundreds of other tor 
implementations in
every existing language, and that any of them should be compatible 
if they

follow the specification.)

 3. Update the proposal to mention that each DirAuth would have their 
own
tree, thus the consensus document in the end would have somewhere 
between

5 and 9 CoSi signatures.

 4. There's a typo in §5.2: s/witnesse's/witnesses'/

Thanks, everyone, for the great discussion!

Best regards,


Re: [tor-dev] Using Tor Stealth HS with a home automation server

2016-07-08 Thread bancfc

On 2016-07-08 18:53, Nathan Freitas wrote:

I've been working on some ideas about using Tor to secure "internet of
things", smart devices other than phones, and other home / industrial
automation infrastructure. Specifically, I think this could be a huge
application for Tor Hidden Services and Onion sites configured with
Hidden Service Authentication and "stealth" mode.

Earlier this year, I published some ideas on the subject here
https://github.com/n8fr8/talks/blob/master/onion_things/Internet%20of%20Onion%20Things.pdf
showing how you could use Orbot and IP Camera apps to build a 
cloud-free

Tor-secured "Dropcam" style setup.



Nice! An interesting Orbot feature to have would be generating QR codes 
of authenticated Hidden Service info, so mobile devices can easily add 
each other to a trusted network.





[tor-dev] Comments on Yawning's Draft proposal for Debian

2016-06-12 Thread bancfc
I thought the proposal [1] was well written, but there is one major point 
it should include:


Sometimes apt/dpkg can contain remotely exploitable bugs, which is a big 
risk when updates are fetched over HTTP. As it happens, anyone could 
have been in a position to poison the update process and take over the 
machine because of [CVE-2014-6273] in apt-get. [2] What makes 
this bug crippling is that updating apt to fix it would have exposed the 
system to what the fix was supposed to prevent. The safest option this 
time was to manually download the fixed package out of band. Updating 
from an Onion Service would protect systems from any tampering/attacks at 
the exits, while bringing all the usual benefits of package metadata privacy.
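Concretely, with the apt-transport-tor package installed, sources can point 
at an onion mirror (the mirror address below is a placeholder, not a real 
one):

```
# /etc/apt/sources.list
deb tor+http://<debian-onion-mirror>.onion/debian          jessie         main
deb tor+http://<debian-onion-mirror>.onion/debian-security jessie/updates main
```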




***

While there's been some progress in setting up Debian APT Onion Services 
[3][4], it's still a long way from being enabled as a safe default. This 
problem, along with many others summarized in the Debian wiki [5] (such as 
upstream patching of chatty apps that leak system information, like 
pip [6]), would make for great talking points at the next DebConf.




[1] https://yawnbox.com/index.php/2016/05/03/draft-proposal-for-debian/
[2] http://security-tracker.debian.org/tracker/CVE-2014-6273
[3] 
http://richardhartmann.de/blog/posts/2015/08/24-Tor-enabled_Debian_mirror/
[4] 
http://richardhartmann.de/blog/posts/2015/08/25-Tor-enabled_Debian_mirror_part_2/

[5] https://wiki.debian.org/TorifyDebianServices
[6] https://lists.debian.org/debian-security/2016/05/msg00059.html




Re: [tor-dev] TUF Repository for Tor Browser

2016-06-11 Thread bancfc

On 2016-06-10 18:27, Lunar wrote:

ban...@openmailbox.org:

Rehash of previous discussions on the topic:


See #3994.


The major reasons why TBB is not in the Debian repository:

* The reproducible build system depends on a static binary image of 
(then

Ubuntu) which runs counter to Debian policy.


It's likely not a problem if built from source.

* TBB is based on Firefox ESR and not Iceweasel which also runs into 
the "no

duplicate source  package" policy of Debian.


I've discussed this with Debian security team a while ago and they are
ok with duplicate source code as long as the updates are done in a
timely manner. Tor Browser has a good record, so it's fine.

Reasons for unavailability of TBB .deb in the Tor Project APT 
repository:


* The break neck speed of development


A regular build could probably be automated via Jenkins.

* Its not easily packaged and the amount of effort needed is better 
spent

otherwise.


As far as I understand, the main issue is that Tor Browser only works
with a single (pre-populated) profile which can't be shared amongst
multiple users. Once this is solved, and Tor Browser can be installed
system-wide, getting a package should not be very hard.

Hope that helps,


Thanks, Lunar, for the update. I thought the effort to upstream TBB had 
completely stalled because there was no activity on #3994. Good to know 
it's still alive.


Is there somewhere I could look to track progress besides that ticket?


[tor-dev] TUF Repository for Tor Browser

2016-06-10 Thread bancfc
In light of the technical obstacles that prevent packaging Tor Browser 
(see below), I propose operating a repository that relies on The Update 
Framework (TUF). [0] TUF is a secure updater system designed to resist 
many classes of attacks. [1] It's based on Thandy (the work of Roger, 
Nick, Sebastian and others).


The advantage of this proposal is that Tor-based distros (and others in 
general) can finally retire the TBB downloaders and shed the maintenance 
burden. Also, there is no need to reinvent secure download mechanisms 
when there is a project that already covers this.


***

Rehash of previous discussions on the topic:

The major reasons why TBB is not in the Debian repository:

* The reproducible build system depends on a static binary image of
(then) Ubuntu, which runs counter to Debian policy.


* TBB is based on Firefox ESR and not Iceweasel, which also runs into
the "no duplicate source package" policy of Debian.



Reasons for unavailability of TBB .deb in the Tor Project APT 
repository:


* The breakneck speed of development

* It's not easily packaged, and the amount of effort needed is better
spent otherwise.




***

[0] https://theupdateframework.github.io/
[1] https://github.com/theupdateframework/tuf/blob/develop/SECURITY.md


[tor-dev] Paper: SoK: Towards Grounding Censorship Circumvention in Empiricism

2016-06-06 Thread bancfc
A paper presented at the Security and Human Behaviour 2016 conference
examines how Tor pluggable transports hold up against dozens of
detection techniques. Censors focus more on detecting circumvention
techniques during the setup phase than after the fact - the opposite of
what most academic work in this area assumes.


http://internet-freedom-science.org/circumvention-survey/sp2016/sok-sp2016.pdf


Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-20 Thread bancfc

On 2016-05-19 15:28, isis agora lovecruft wrote:

ban...@openmailbox.org transcribed 7.3K bytes:

This brings up another point that digresses from the discussion:

Dan and Tanja support more conservative systems like McEliece because
it survived decades of attacks. In the event that cryptanalysis
eliminates lattice crypto, McEliece will remain the only viable and
well-studied alternative.


First, it's not viable (for Tor's use case).  I'll show that in a 
second.

Second, there are other bases for construction of post-quantum secure
cryptosystems — not just some lattice problems or problems from coding
theory.



How prohibitive are McEliece key sizes that they can never make
it into Tor?


Extremely prohibitive.  McEliece (using the original scheme proposed by
McEliece in 1978 [0] but with the recommended post-quantum secure
parameters of n=6960, k=5413, t=119) keys are 1 MB in size. [1]

Plugging this number into my previous email [2] in this thread:

  - average microdescriptor size would be ~1048992 bytes (252161%
    larger!)
  - the network would use 5043 Gb/s for directory fetches (this is
    roughly 33 times the current total estimated capacity of the
    network)
Result: no more Tor Network.
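A rough back-of-the-envelope check of these figures can be sketched in a few lines; the ~416-byte baseline microdescriptor size and the ~150 Gb/s total network capacity are assumptions inferred from the quoted numbers, not measured values:

```python
# Sanity check of the quoted directory-overhead figures.
# Assumptions (inferred, not authoritative): a current microdescriptor
# is ~416 bytes, and total network capacity is ~150 Gb/s.
BASELINE_MDESC_BYTES = 416
MCELIECE_KEY_BYTES = 1048576       # 1 MB public key (n=6960, k=5413, t=119)
NETWORK_CAPACITY_GBPS = 150

new_mdesc = BASELINE_MDESC_BYTES + MCELIECE_KEY_BYTES
pct_larger = (new_mdesc - BASELINE_MDESC_BYTES) / BASELINE_MDESC_BYTES * 100
print(new_mdesc)                        # 1048992 bytes, the quoted size
print(round(pct_larger))                # ~252062%, order of the quoted 252161%

capacity_ratio = 5043 / NETWORK_CAPACITY_GBPS
print(round(capacity_ratio, 1))         # ~33.6, "roughly 33 times" capacity

# Scaling the quoted 5043 Gb/s down to a 65 KB McBits key:
mcbits_mdesc = BASELINE_MDESC_BYTES + 65536
mcbits_gbps = 5043 * mcbits_mdesc / new_mdesc
print(round(mcbits_gbps))               # ~317 Gb/s, near the quoted ~320 Gb/s
```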


Can the size problem be balanced against longer re-keying times for PFS
- say once every 6 hours or more instead of every connection (there are
probably many other changes needed to accommodate it)?


No.

Further, there are known attacks on McEliece resulting from ciphertext
malleability, i.e. adding codewords to a valid ciphertext yields
another valid ciphertext. [3]  This results in a trivial CCA2 attack
where the adversary can add a second message m' to a ciphertext c with
c' = c ⊕ m'Gpub, where Gpub is the product of the matrices G, the
generating set of vectors, and P, the permutation matrix.  One
consequence of this ciphertext malleability is that an attacker may use
the relation between two different messages encrypted to the same
McEliece key to recover error bits, leading to the attacker being able
to recover the plaintext. [4]  Were we to use Shoup's KEM+DEM approach
for transforming a public-key encryption scheme into a mechanism for
deriving a shared secret (as is done in the NTRU Prime paper), this
plaintext recovery attack would result in the attacker learning the
shared secret, meaning that all authentication and secrecy in the
handshake are lost completely.  There are possible workarounds to the
CCA2 attacks (e.g. the Kobara-Imai Gamma Conversion) which generally
increase both the implementational complexity of the scheme and the
number of bytes required to be sent in each direction (by introducing
redundancy into the codewords and uniformly-distributed randomness into
ciphertexts), however these are inelegant, kludgey fixes for a system
not worth saving because ITS KEYS TAKE UP AN ENTIRE MEGABYTE.
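Written out explicitly (a sketch using only what is stated above: Gpub as the public generator matrix and e as the error vector of the original encryption), the malleability is simply linearity of the code:

```latex
% McEliece encryption of m with error vector e:
%   c = m G_pub (+) e            where (+) denotes XOR
% XORing in a chosen codeword m' G_pub yields a valid encryption of m (+) m':
\begin{align*}
  c  &= m\,G_{\mathrm{pub}} \oplus e \\
  c' &= c \oplus m'\,G_{\mathrm{pub}}
      = (m \oplus m')\,G_{\mathrm{pub}} \oplus e
\end{align*}
```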

They've worked on making tradeoffs of longer decryption times to get
smaller keys in their McBits implementation [1] but they are still
nowhere near lattice ones (McEliece has very fast encoding/decoding so
it works out).


Yes, I'm aware.  Also, Peter (in CC and working with me on this 
proposal) is

the other author of the McBits paper.  If Peter thought McBits was more
suitable than NewHope for a Tor handshake, then I hope he'd have 
mentioned

that by now. :)

Also, for a minimum security of 128 bits, the smallest McBits keysize
available is 65 KB; that's still not doable for Tor.  (In fact, that
would result in 320 Gb/s being used for directory fetches — more than
double the current estimated total bandwidth capacity of the network —
so again there would be no Tor Network.)

With the average webpage being 2 MB in size, larger keys may not be
that bad?


Hopefully, everyone is now convinced by the arguments above that, yes,
larger keys are that bad.


[0]: http://ipnpr.jpl.nasa.gov/progress_report2/42-44/44N.PDF
[1]: http://pqcrypto.eu.org/docs/initial-recommendations.pdf
[2]: 
https://lists.torproject.org/pipermail/tor-dev/2016-May/010952.html

[3]: Overbeck, R., Sendrier, N. (2009). "Code-Based Cryptography".
   in Bernstein, D.J., Buchmann, J., Dahmen, E. (Eds.),
   Post-Quantum Cryptography (pp. 134-136). Berlin: Springer Verlag.
   https://www.springer.com/us/book/9783540887010
[4]: http://link.springer.com/content/pdf/10.1007%2FBFb0052237.pdf

Best Regards,


Thanks for explaining, Isis, and hats off to you, Yawning, and Peter
for leading the PQ transition.



Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-17 Thread bancfc

On 2016-05-16 18:53, isis agora lovecruft wrote:


Hello,

I am similarly excited to see a more comprehensive write up on the NTRU 
Prime

idea from Dan's blog post several years ago on the idea for a
subfield-logarithm attack on ideal-lattice-based cryptography. [0]  The 
idea to
remove some of the ideal structure from the lattice, while still aiming 
to
keep a similarly high, estimated minimum, post-quantum security 
strength as
Newhope (~2^128 bits post-quantum security and ~2^215 classical for 
NTRU
Prime, versus ~2^206 bits post-quantum and ~2^229 classical for 
Newhope) and
speed efficiencies competitive with NewHope, [1] by altering the 
original NTRU

parameters is very exciting, and I'm looking forward to more research
w.r.t. the ideas of the original blog post (particularly the
exploitation of the ideal structure).  Additionally, the Toom6
decomposition of the
of the ideal structure).  Additionally, the Toom6 decomposition of the
"medium-sized" 768-degree polynomial in NTRU Prime in order to apply 
Karatsuba
is quite elegant.  Also, I'm glad to see that my earlier idea [2] to 
apply a
stable sorting network in order to generate valid polynomial 
coefficients in

constant-time is also suggested within the NTRU Prime paper.

However, from the original NTRU Prime blog post, Dan mentioned towards 
the
end: "I don't recommend actually using NTRU Prime unless and until it 
survives
years of serious cryptanalytic attention, including quantitative 
evaluation of
specific parameter choices."  Léo Ducas, one of the NewHope authors, 
has
responded to the NTRU Prime paper with a casual cryptanalysis of its 
security
claims, [3] mentioning that "A quick counter-analysis suggests the 
security of
the proposal is overestimated by about 75 bits" bringing NTRU Prime 
down to

~2^140 classical security.


As you say, I think the security reduction is a bit steep but not
catastrophic. However, when I previously saw the NTRU Prime blog post,
I interpreted it to mean "it's very likely that the powerful attack
against the Smart–Vercauteren system can be extended against
lattice-based cryptosystems in general, which would completely break
them". [0] This brings up another point that digresses from the
discussion:

Dan and Tanja support more conservative systems like McEliece because
it survived decades of attacks. In the event that cryptanalysis
eliminates lattice crypto, McEliece will remain the only viable and
well-studied alternative. How prohibitive are McEliece key sizes that
they can never make it into Tor? Can the size problem be balanced
against longer re-keying times for PFS - say once every 6 hours or more
instead of every connection (there are probably many other changes
needed to accommodate it)? They've worked on making tradeoffs of longer
decryption times to get smaller keys in their McBits implementation
[1], but they are still nowhere near lattice ones (McEliece has very
fast encoding/decoding so it works out). With the average webpage being
2 MB in size, larger keys may not be that bad? Another interesting
strategy for performance/efficiency is slicing public keys and
communicating the pieces in parallel. [2]





Current estimates on a hybrid BKZ+sieving attack combined with Dan's
subfield-logarithm attack, *if* it proves successful someday (which 
it's

uncertain yet if it will be), would (being quite generous towards the
attacker) roughly halve the pre-quantum security bits for n=1024 (since 
the
embedded subfield tricks are probably not viable), bringing NewHope 
down to
103/114 bits.  For the case of the hybrid handshake in this proposal, 
it still
doesn't matter, because the attacker would still also need to break 
X25519,
which still keeps its 2^128 bits of security.  (Not to mention that 
103-bits
post-quantum security is not terrible, considering that the attacker 
still
needs to do 2^103 computations for each and every Tor handshake she 
wants to

break because keys are not reused.)
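The reason the attacker "would still also need to break X25519" is that a hybrid handshake feeds both shared secrets into the key derivation. A minimal, generic sketch of that combiner idea (an illustration only, not Tor's actual ntor/NewHope construction; the secrets here are stand-in random bytes):

```python
import hashlib
import os

def hybrid_kdf(ss_x25519: bytes, ss_newhope: bytes) -> bytes:
    # Hash the concatenation of both shared secrets: recovering the
    # session key requires knowing (i.e. breaking) BOTH components.
    return hashlib.sha256(ss_x25519 + ss_newhope).digest()

# Stand-ins for the two shared secrets negotiated in the handshake.
ss_classical = os.urandom(32)   # from the X25519 exchange
ss_pq = os.urandom(32)          # from the NewHope exchange

key = hybrid_kdf(ss_classical, ss_pq)
assert key == hybrid_kdf(ss_classical, ss_pq)    # deterministic
assert key != hybrid_kdf(os.urandom(32), ss_pq)  # wrong classical secret fails
```

A quantum attacker who breaks only NewHope still faces the full classical hardness of X25519, and vice versa, which is the point of hybridizing during the transition period.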

Please feel free to correct me if I've misunderstood something.  IANAC
and all that.

Further, there are other problems in the NTRU Prime paper:

 1. Defining PFS as "the server erases its key every 60 seconds" seems
    arbitrary and a bit strange.  It also makes the claims hard to
    analyse in comparison with the NTRU literature (where, as far as I
    know, it's never stated whether or not keys may be reused, and what
    downsides might come with that) as well as with NewHope (where it's
    explicitly stated that keys should not be reused).

 2. In Fig. 1.2, the number of bytes sent by a NewHope client is 1792,
    not 2048.  (To be fair, it was 2048 bytes in an earlier version.)

 3. The bandwidth estimates for NTRU Prime do not take into account
    that, due to providing a key-encapsulation mechanism rather than a
    key exchange, the client must already know the server's long-term
    public encryption key, in order that the client may encrypt its
    public key to the server in the first round of the handshake.

Further, and more specifically in the 

[tor-dev] User Behavior Tracking defenses in VMs

2016-03-14 Thread bancfc

Intended for qemu-discuss
/cc/ libvir-list, whonix-devel, tor-dev

***

Hello. I work on WhonixOS, an anonymity distro based on Tor. This
feature request is related to the topics of privacy and anonymity. It's
a complex topic and probably not in your area of focus, but I think it
has important implications because security and privacy are very much
related in today's hostile computing environment.


Virtualization is useful in presenting an identical environment and set
of "hardware" for each user, which goes a long way toward creating an
anonymity set of systems. That way a system attacker, advertisers, and
online trackers would not be able to fingerprint a user or their
hardware.


The problem: Tracking techniques have become more sophisticated with 
time. They advanced from simple cookies to browser/device fingerprinting 
(which Tor Browser focuses on defeating) to user behavior 
fingerprinting. The latter is about profiling how a user types on a 
keyboard or uses a mouse [2].


Keystroke dynamics is a super creepy way to track users based on how
long they press keys (dwell time) and the time between key presses (gap
time). This is extremely accurate at identifying individuals because of
how unique these measurements are. Advertising networks (Google,
Facebook...) that fingerprint users on both the clearnet and Tor can
deanonymize users. This technique is already actively used in the wild
[6][7].



Potential Solutions:

Since input devices are all emulated, it's a great opportunity to stop
this profiling technique.


* A security researcher designed a proof-of-concept plugin for the
Chrome browser, known as KeyBoardPrivacy, that mitigates this;
something like the PoC addon in [1] could be implemented here. Adding
random delays within a 50 millisecond range to the dwell and gap times
of the emulated keyboards is enough to skew the values and render this
attack useless while not affecting performance.


* The changes made to Tor Browser to make JS timers more coarse-grained
but constant (250ms for keyboard events) were not enough to stop
keystroke dynamics fingerprinting, because a malicious script can evict
the cache and allow extrapolation of true timing events within 1-5ms
accuracy. [3][5] Their goal is to instead add jitter to the timers [4].
A similar solution to that proposed in [4] can be implemented in all
QEMU-KVM timers to mitigate both attacks.
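The jitter idea for emulated input devices can be sketched as follows (a minimal illustration; the event representation and the exact delay model are assumptions, with only the ~50 ms range taken from the PoC described above):

```python
import random

JITTER_RANGE_MS = 50  # max random delay, per the KeyBoardPrivacy PoC

def jitter_timestamps(event_times_ms, jitter_ms=JITTER_RANGE_MS, rng=random):
    """Delay each key event by a random amount in [0, jitter_ms).

    Events are only ever pushed later, never earlier, so event order is
    preserved while the dwell/gap timings a keystroke-dynamics profiler
    measures are skewed beyond usefulness.
    """
    out = []
    last = float("-inf")
    for t in event_times_ms:
        jittered = t + rng.uniform(0, jitter_ms)
        jittered = max(jittered, last)  # never reorder events
        out.append(jittered)
        last = jittered
    return out

# Example: raw key-down times in ms; the gaps between them are the
# signal a keystroke-dynamics profiler would measure.
raw = [0.0, 120.0, 195.0, 310.0]
noisy = jitter_timestamps(raw)
assert all(n >= r for n, r in zip(noisy, raw))        # only delays
assert all(a <= b for a, b in zip(noisy, noisy[1:]))  # order preserved
```

Applying this at the emulated-device level, rather than in the guest browser, would cover every application in the VM at once.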



[1] 
https://paul.reviews/behavioral-profiling-the-password-you-cant-change/
[2] 
http://jcarlosnorte.com/security/2016/03/06/advanced-tor-browser-fingerprinting.html
[3] 
https://www.lightbluetouchpaper.org/2015/07/30/double-bill-password-hashing-competition-keyboardprivacy/#comment-1288166

[4] https://trac.torproject.org/projects/tor/ticket/16110
[5] https://trac.torproject.org/projects/tor/ticket/1517
[6] http://scraping.pro/no-captcha-recaptcha-challenge/
[7] 
https://nakedsecurity.sophos.com/2013/11/01/facebook-to-silent-track-users-cursor-movements-to-see-which-ads-we-like-best/
