Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread tor

Actually...

A bad-relays mailing list would IMO take a degree of care to do right, 
considering that email gets gathered at the packet level by 
intelligence agencies which can be expected to be initiating attacks. 
Sensitive material belongs in GPG- or PGP-encrypted mail or similar. My 
thought is that the juicy details of any bad-relays discussion should be 
held even more tightly than a closed mailing list.



Re: [tor-talk] Tor DNS

2014-07-31 Thread Mike Fikuart
Thanks for the response, Ondrej.

I was thinking specifically of the .onion addresses, as opposed to 
conventional www addressing.  When the client first recognises the .onion 
domain, could a DNS be set up within Tor that deals only with the .onion 
hostname/domain space, while conventional DNS requests for www are handled as 
they are now (or as developed per proposal 129)?

My thought was that [hiddenservice].onion would be dealt with by the Tor 
NameServer, which would return the hostname (derived from the public key).  From 
there the hidden services protocol would continue as normal.  The only weakness 
would be the security of the information coming back from the D/NS pointing to 
the same hostname.onion; however, with Tor circuits to the DNS this should 
negate such an attack.  Further to your comment about the request leaving the 
Tor network: these DNS requests would be handled internally, never leaving the 
network.  Is this feasible and reliably reproducible?
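
For context, the derivation mentioned above is mechanical; a minimal Python
sketch of the current (v2) scheme, where the name is the base32 encoding of
the first 80 bits of the SHA-1 digest of the service's DER-encoded RSA public
key (the key bytes below are a placeholder):

import base64
import hashlib

def onion_address(public_key_der):
    # base32-encode the first 10 bytes (80 bits) of SHA-1 over the DER key
    digest = hashlib.sha1(public_key_der).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

Any naming layer would only map a memorable label onto such an address; the
address itself stays self-authenticating.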

Just as there was an increasing need for a Tor search engine, this would (I 
believe) encourage more people to present their information/services in a 
usable format.

I note your further comments about the cost/resources of registering the TLD 
.onion, but there may come a time when a business model can benefit from the 
investment and returns.
 
Yours sincerely
 
Mike Fikuart IEng MIET
 
Mobile: 07801 070580
Office: 020 33840275
Blog: mikefikuart
Skype: mikefikuart
Twitter: mikefikuart
LinkedIn: mikefikuart

On 30 Jul 2014, at 22:43, Ondrej Mikle ondrej.mi...@gmail.com wrote:

Hi,

On 07/30/2014 01:43 PM, Mike Fikuart wrote:
 I am aware that there is a Project Idea (under
 https://www.torproject.org/getinvolved/volunteer.html.en#improvedDnsSupport)
 point q, "Improved DNS support for Tor";

I am the author of proposal 219.

If you want DNS, you can make it work today via a tunnel with Unbound. One
sample howto: https://labs.nic.cz/page/993/ - DNSSEC is optional
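
The gist of that setup, with illustrative port numbers (the local Tor client
answers DNS on its DNSPort and Unbound forwards everything there):

## torrc: have the local Tor client answer DNS queries
DNSPort 127.0.0.1:5353

## unbound.conf: forward all queries to Tor's DNSPort
server:
    do-not-query-localhost: no
forward-zone:
    name: "."
    forward-addr: 127.0.0.1@5353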

 however has there been any exploration or development of a fully fledged
 DNS system for Tor

I have spent more than half a year trying to make it work. Most of the time
went into DNSSEC and especially its latency - it is quite easy to hit 20
roundtrips for one DNS request because of CNAME and DNAME, which can take 5-20
seconds and incur seemingly random errors (from the user's point of view).

On a good day, with a good circuit and a warm cache, you can get an average of
~3 seconds to resolve a request.

 that could give human readable names to hidden services?

This is not a good idea, for many reasons. I'm not up-to-date with the latest
rendezvous protocol, but AFAIK the DNS request would be sent from a different
exit node than the nodes used for rendezvous - which would in turn make
correlation attacks easier.

 If further consideration is given to also pursuing the registration of the
 .onion domain as a TLD, this could also bring further publicity and revenue
 to the Tor Project.  The domain auctions for .tv and .co raised
 significant revenue for Tuvalu and Colombia, not to mention
 the managing organisations.

A TLD costs USD 150k as a down payment and requires additional infrastructure
to support the gTLD, which is not cheap. There are much better ways to
spend the resources.


 Has any of this been looked at previously or are there reasons why this is
 not being pursued?

DNS, being 30+ years old, has incredibly many special cases. There are
quick-and-dirty implementations, but that's probably not what one would want
in anonymity software.

Ondrej





Re: [tor-talk] Tor DNS

2014-07-31 Thread Lunar
Mike Fikuart:
 My thought was that [hiddenservice].onion would be dealt with by the
 Tor NameServer to return the hostname (derived from public key).

So if I understand correctly, you would like some entity to keep
a directory of human memorizable names pointing to hidden service
addresses.

The problem is that this entity will be subject to pressure from many
different actors. How should litigation over a unique name be handled?
What if some state decides a site should be censored? This is not a
very good place to be if you care about freedom of communication
(vs. only making money).

-- 
Lunar lu...@torproject.org




Re: [tor-talk] Tor DNS

2014-07-31 Thread CJ


On 07/31/2014 02:45 PM, Lunar wrote:
 Mike Fikuart:
 My thought was that [hiddenservice].onion would be dealt with by the
 Tor NameServer to return the hostname (derived from public key).
 
 So if I understand correctly, you would like some entity to keep
 a directory of human memorizable names pointing to hidden service
 addresses.
 
 The problem is that this entity will be subject to pressure from many
 different actors. How should litigation over a unique name be handled?
 What if some state decides a site should be censored? This is not a
 very good place to be if you care about freedom of communication
 (vs. only making money).
 
 
 

Heya! Just jumping in to point at a post on the tor-dev ML:
https://lists.torproject.org/pipermail/tor-dev/2014-July/007258.html

This approach may be really interesting; I think it deserves an answer
:). It bypasses the centralized, pressure-prone entity, keeps anonymity,
and so on.

Cheers,

C.


Re: [tor-talk] Question about the myfamily variable in torrc

2014-07-31 Thread Martin Kepplinger
On 2014-07-31 01:02, Cypher wrote:
 I'm about to fire up another exit relay and I want to make sure users
 are protected from picking both of my relays in the same chain. So I
 came across the MyFamily variable in the torrc file. I have two
 questions about how to properly set this variable:
 
 1. In the docs, it says NOT to use your relay's fingerprint. What do I
 use instead, and where can I get it?
 
 2. Do I need to list the current relay in its own MyFamily variable or
 just every other relay I run excluding the current one?
 
 Thanks,
 Cypher
 
 
As described in the manual
(https://www.torproject.org/docs/tor-manual.html.en) you can use fingerprints
or nicknames; fingerprints are preferred, I guess. The format is $fingerprint.
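
For example (with hypothetical fingerprints), the identical line can go into
the torrc of both relays:

MyFamily $0123456789ABCDEF0123456789ABCDEF01234567,$FEDCBA9876543210FEDCBA9876543210FEDCBA98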

You can just maintain one list of all your relays and put the same line
in every torrc. It doesn't matter whether the relay itself is in it.

martin


Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread Anders Andersson
And since it's not possible to do this right without any leaks (due to
software bugs, planted flaws, and insiders), the only thing it will lead
to is making it impossible for users to verify the decisions leading up
to which servers are bad. The NSA will still get your
precious warnings.

On Thu, Jul 31, 2014 at 1:44 PM,  t...@t-3.net wrote:
 Actually...

 A bad-relays mailing list would IMO take a degree of care to do right,
 considering that email gets gathered at the packet level by intelligence
 agencies which can be expected to be initiating attacks. Sensitive material
 belongs in GPG- or PGP-encrypted mail or similar. My thought is that the
 juicy details of any bad-relays discussion should be held even more tightly
 than a closed mailing list.



Re: [tor-talk] Spoofing a browser profile to prevent fingerprinting

2014-07-31 Thread Joe Btfsplk
Wow, I'm surprised no one has questioned this before or has a reasonable 
explanation: why Panopticlick's total estimated entropy, *reported in the 
sentence _above_ their results table,* is much less than the sum of the 
individual parameters' entropies shown in the table:


_Currently, we estimate that your browser has a fingerprint that 
conveys *nn.nn bits* of identifying information_.


To arrive at a total *bits of identifying information*, do they ignore 
characteristics with entropies below certain values?
Because, in a typical test - with JS enabled - the sentence may show a total 
entropy of *13.xx bits,*
while in the same test the sum of entropies from their included table may be 
*34.xx* bits of identifying information.


Why is there such a huge difference?  To arrive at their total, what 
do they ignore - and WHY?
Or do they take the results in the table and apply additional 
algorithms?  If so, do they detail that?

Thanks.
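
As a rough sanity check on the "one in N" figures in the results quoted
below: N tracks 2**bits, e.g. this throwaway Python loop over Ben's numbers:

for bits in (16, 22, 15.98, 21.07, 12.06, 9.05):
    print("%.2f bits -> one in ~%d" % (bits, round(2 ** bits)))
# 16.00 bits -> one in ~65536 (site: 65,487); 9.05 bits -> ~530 (site: 529)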

On 7/30/2014 9:12 AM, Joe Btfsplk wrote:

On 7/29/2014 4:35 PM, Ben Bailess wrote:

But here are some numbers that I just collected that
perhaps could be of use to you. This test was done with the latest TBB
(3.6.3) and Firefox versions on Linux (Fedora), with both JS on and off:

FF (private browsing) / JS disabled = 16 bits (not unique - one in 65,487)
FF (private browsing) / JS enabled = 22 bits (unique out of 4M samples)
FF (normal browsing) / JS disabled = 15.98 bits (not unique - one in
64,524)
FF (normal browsing) / JS enabled = 21.07 bits (not unique but one in
2,193,824 [roughly 2 matching entries in the sample]... so the other data
point may well have been me...)
TBB / JS enabled = 12.06 bits (not unique - one in 4,260)
TBB / JS disabled = 9.05 bits (not unique - one in 529 are same)


Thanks to all for your input.
OK, I slept on it and revisited the Panopticlick fingerprinting results
(https://panopticlick.eff.org).  Silly me - I was looking at the values
listed for each parameter, then assessing the total entropy for all
parameters shown.
Yes, if I look at the value they report *in a sentence* above the
results table, that total is far less than the sum of bits of identifying
information for all browser characteristics measured, as shown in their
results table.

For those that haven't looked at the site (or anything similar), the
total entropy that Panopticlick arrives at is far less than the sum of the
individual values.
(The total is less than the sum of its parts ??)
Like when it says,
_Currently, we estimate that your browser has a fingerprint that
conveys *13.72 bits* of identifying information_*,* but the sum of all
parameters in that same test is *far* greater than 13.72 bits.

Maybe someone more familiar with their algorithm for arriving at the grand
total *bits of identifying information* (that they state in a
sentence above the results table) can explain why their stated total
entropy for the browser tested is *so much lower* than the total of all
parameters shown in the table of test results.

I read their paper, https://panopticlick.eff.org/browser-uniqueness.pdf,
but missed any explanation of why that is so.
I have an idea why that may be true, but no (generic) mathematical
explanation.




Re: [tor-talk] Spoofing a browser profile to prevent fingerprinting

2014-07-31 Thread Mirimir
On 07/31/2014 11:44 AM, Joe Btfsplk wrote:
 Wow, I'm surprised no one has questioned this before or has a reasonable
 explanation: why Panopticlick's total estimated entropy, *reported in the
 sentence _above_ their results table,* is much less than the sum of the
 individual parameters' entropies shown in the table:
 
 _Currently, we estimate that your browser has a fingerprint that
 conveys *nn.nn bits* of identifying information_.
 
 To arrive at a total *bits of identifying information*, do they ignore
 characteristics with entropies below certain values?
 Because, in a typical test - with JS enabled - the sentence may show a total
 entropy of *13.xx bits,*
 while in the same test the sum of entropies from their included table may be
 *34.xx* bits of identifying information.
 
 Why is there such a huge difference?  To arrive at their total, what
 do they ignore - and WHY?
 Or do they take the results in the table and apply additional
 algorithms?  If so, do they detail that?
 Thanks.

I gather that entropy isn't always additive. I'd need to learn a lot
before saying much more about that. There's probably something useful in
https://panopticlick.eff.org/browser-uniqueness.pdf.

Having Javascript blocked is itself information, but I don't think that
Panopticlick is including that in the result.
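
A toy illustration of that non-additivity, with made-up numbers: when two
attributes correlate they share information, so the joint entropy falls short
of the sum of the per-attribute entropies:

from collections import Counter
from math import log2

# Hypothetical population of (browser, os) pairs; strongly correlated.
population = ([("firefox", "linux")] * 45 + [("chrome", "windows")] * 45
              + [("firefox", "windows")] * 5 + [("chrome", "linux")] * 5)

def entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in counts.values())

print(entropy([b for b, _ in population]))  # ~1.00 bit  (browser alone)
print(entropy([o for _, o in population]))  # ~1.00 bit  (OS alone)
print(entropy(population))                  # ~1.47 bits (joint), not 2.00

Panopticlick's headline number is computed from the full (joint) fingerprint,
which is presumably why it can sit far below the per-row sum.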

 On 7/30/2014 9:12 AM, Joe Btfsplk wrote:
 On 7/29/2014 4:35 PM, Ben Bailess wrote:
 But here are some numbers that I just collected that
 perhaps could be of use to you. This test was done with the latest TBB
 (3.6.3) and Firefox versions on Linux (Fedora), with both JS on and off:

 FF (private browsing) / JS disabled = 16 bits (not unique - one in
 65,487)
 FF (private browsing) / JS enabled = 22 bits (unique out of 4M
 samples)
 FF (normal browsing) / JS disabled = 15.98 bits (not unique - one in
 64,524)
 FF (normal browsing) / JS enabled = 21.07 bits (not unique but one in
 2,193,824 [roughly 2 matching entries in the sample]... so the other
 data
 point may well have been me...)
 TBB / JS enabled = 12.06 bits (not unique - one in 4,260)
 TBB / JS disabled = 9.05 bits (not unique - one in 529 are same)

 Thanks to all for your input.
 OK, I slept on it and revisited the Panopticlick fingerprinting results
 (https://panopticlick.eff.org).  Silly me - I was looking at the values
 listed for each parameter, then assessing the total entropy for all
 parameters shown.
 Yes, if I look at the value they report *in a sentence* above the
 results table, that total is far less than the sum of bits of identifying
 information for all browser characteristics measured, as shown in their
 results table.

 For those that haven't looked at the site (or anything similar), the
 total entropy that Panopticlick arrives at is far less than the sum of the
 individual values.
 (The total is less than the sum of its parts ??)
 Like when it says,
 _Currently, we estimate that your browser has a fingerprint that
 conveys *13.72 bits* of identifying information_*,* but the sum of all
 parameters in that same test is *far* greater than 13.72 bits.

 Maybe someone more familiar with their algorithm for arriving at the grand
 total *bits of identifying information* (that they state in a
 sentence above the results table) can explain why their stated total
 entropy for the browser tested is *so much lower* than the total of all
 parameters shown in the table of test results.

 I read their paper, https://panopticlick.eff.org/browser-uniqueness.pdf,
 but missed any explanation of why that is so.
 I have an idea why that may be true, but no (generic) mathematical
 explanation.
 


Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread Öyvind Saether
 A bad-relays mailing list would IMO take a degree of care to do
 right, considering that email gets gathered at the packet level by
 intelligence agencies which can be expected to be initiating attacks.
 Sensitive material belongs in GPG- or PGP-encrypted mail or similar. My
 thought is that the juicy details of any bad-relays discussion should be
 held even more tightly than a closed mailing list.

Too bad ContactInfo <email_address> in torrc isn't ContactInfo
<email_address> <gpg_key>, or alternatively:

ContactEmail <x@y>
ContactKey <numbers>

There is currently no safe way to be contacted by the Tor gang, and
this is a problem with your idea.




Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread Roger Dingledine
On Thu, Jul 31, 2014 at 07:58:34PM +0200, Öyvind Saether wrote:
 Too bad ContactInfo <email_address> in torrc isn't ContactInfo
 <email_address> <gpg_key>, or alternatively:
 
 ContactEmail <x@y>
 ContactKey <numbers>

Hm? ContactInfo is just a string. You can set it however you like.

Here's the stanza in the torrc file:

## Administrative contact information for this relay or bridge. This line
## can be used to contact you if your relay or bridge is misconfigured or
## something else goes wrong. Note that we archive and publish all
## descriptors containing these lines and that Google indexes them, so
## spammers might also collect them. You may want to obscure the fact that
## it's an email address and/or generate a new address for this purpose.
#ContactInfo Random Person <nobody AT example dot com>
## You might also include your PGP or GPG fingerprint if you have one:
#ContactInfo 0xFFFFFFFF Random Person <nobody AT example dot com>

 There is currently no safe way to be contacted by the Tor gang, and
 this is a problem with your idea.

Another piece of the problem is that we don't enforce any particular
info in the ContactInfo line (or even that you set one at all). I guess
we could send a challenge to the email address, and demand a reply,
and otherwise set a cap on the capacity that the network will assign you.
Seems like that would be another barrier to our volunteer relay operators,
but maybe for exit relays it's worth it.

--Roger



Re: [tor-talk] is torrc a manual page now? que?

2014-07-31 Thread Öyvind Saether
 Hm? ContactInfo is just a string. You can set it however you like.
 
 Here's the stanza in the torrc file:
 
 ## Administrative contact information for this relay or bridge. This line
 ## can be used to contact you if your relay or bridge is misconfigured or
 ## something else goes wrong. Note that we archive and publish all
 ## descriptors containing these lines and that Google indexes them, so
 ## spammers might also collect them. You may want to obscure the fact that
 ## it's an email address and/or generate a new address for this purpose.
 #ContactInfo Random Person <nobody AT example dot com>
 ## You might also include your PGP or GPG fingerprint if you have one:
 #ContactInfo 0xFFFFFFFF Random Person <nobody AT example dot com>

Thank you for this secret information. Please consider adding it to the
MANUAL PAGE since I've had my torrc since 2008 and it's not got any of
that classified information in it. 

 Another piece of the problem is that we don't enforce any particular
 info in the ContactInfo line (or even that you set one at all). I
 guess we could send a challenge to the email address, and demand a
 reply, and otherwise set a cap on the capacity that the network will
 assign you. Seems like that would be another barrier to our volunteer
 relay operators, but maybe for exit relays it's worth it.

I don't like this. You are saying you must have an e-mail address and reply
to it in order to help out. That is a bad thing. A lot of services these days
require you to give them (and sometimes verify) a mobile phone number, and I
personally don't use any of those services; sometimes I try to sign up
for such things but end up backing out because they demand too much
info. I see no reason why having a valid e-mail and using it to reply
should be a requirement for helping the Tor network.




Re: [tor-talk] is torrc a manual page now? que?

2014-07-31 Thread Roger Dingledine
On Thu, Jul 31, 2014 at 08:18:07PM +0200, Öyvind Saether wrote:
 Thank you for this secret information. Please consider adding it to the
 MANUAL PAGE since I've had my torrc since 2008 and it's not got any of
 that classified information in it. 

Well, it certainly isn't meant to be a secret. You should totally check
out the torrc.sample file that comes with the tarball, or (equivalently)
the torrc file that comes with the deb. It has come a long way since
the last time you looked at it. :)

For reference, here is a copy of the torrc.sample file from Tor
0.1.0.1-rc, released March 28 2005:
https://gitweb.torproject.org/tor.git/blob/fddf560254da197b9e4726b6101f358f4b68e7af:/src/config/torrc.sample.in

  I guess we could send a challenge to the email address, and demand a
  reply, and otherwise set a cap on the capacity that the network will
  assign you. Seems like that would be another barrier to our volunteer
  relay operators, but maybe for exit relays it's worth it.
 
 I don't like this. You are saying you must have an e-mail address and reply
 to it in order to help out. That is a bad thing. A lot of services these days
 require you to give them (and sometimes verify) a mobile phone number, and I
 personally don't use any of those services; sometimes I try to sign up
 for such things but end up backing out because they demand too much
 info. I see no reason why having a valid e-mail and using it to reply
 should be a requirement for helping the Tor network.

I agree -- there are some real downsides to requiring working contact
info for relays.

On the other hand, there are some real downsides to having large relays
whose operators we don't know. We know the operators of many of the
large relays in the network, but there are many more where we don't.

And of course, confirming that some email address can receive email is
not the same as knowing the operators.

So much to do,
--Roger



Re: [tor-talk] 'How to report bad relays' (blog entry)

2014-07-31 Thread Nusenu

 I believe that the mere fact that a relay was blocked (via BadExit
 or reject) can be published.

Will dir auth ops make the list of rejected relays, including the
reasons for removal, available to the public for transparency?

 where can we find a list of all relays that have been removed
 from the consensus in the past so far? I tried to find 'rejected'
 flagged relays on the following page but wasn't successful. 
 https://consensus-health.torproject.org/
 
 Rejected relays are not part of the consensus

If I understand the AuthDirReject option correctly, a dir auth is not
voting for such a relay with a 'rejected' flag*; it simply does not
put that relay in its vote at all.

So there is currently no way to determine whether a relay has been
disappeared by dir auth ops or whether the relay op decided to stop
running it?

(I was not looking for BadExit relays - they are easy to spot in the
consensus.)

*) actually there is no such thing as a 'rejected' flag
https://gitweb.torproject.org/torspec.git/blob/HEAD:/dir-spec.txt#l1604


Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread Nusenu

 What would be the catch with making these reports and discussion
 public? Would it help bad actors? They will eventually find out
 about the consensus changes anyway, no?
 I think we need to distinguish between the report and the
 discussion. Ultimately, a report that is acted upon *cannot* remain
 secret.  As soon as a relay gets the BadExit flag, the operator can
 figure out that they got caught.  As a result, I believe that the
 mere fact that a relay was blocked (via BadExit or reject) can be
 published.  There is an ongoing discussion if we should do that.
 
 The discussion of observed malicious behaviour, however, can give
 the attacker a lot of knowledge which they can exploit in order to
 evade detection in the future.  Consider, for example, an HTTPS
 MitM attack which targets a small number of web sites.  If somebody
 reports only one of these targets, the attacker can spawn a new
 relay after discovery and simply reduce the set of targeted sites
 in order to remain under the radar.  This seems to be an uphill
 battle and it's difficult to have full transparency without giving
 dedicated adversaries a big advantage.

You might find the proven approach used in other areas (security bugs)
a viable option:

Keep the discussion private until a decision has been reached; make
it (the discussion) public once the report has been closed (whether
with or without a flag or reject entry).

This allows for transparency while at the same time not interfering
with ongoing investigations.


Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread Philipp Winter
On Thu, Jul 31, 2014 at 07:21:59PM +, Nusenu wrote:
  I think we need to distinguish between the report and the
  discussion. Ultimately, a report that is acted upon *cannot* remain
  secret.  As soon as a relay gets the BadExit flag, the operator can
  figure out that they got caught.  As a result, I believe that the
  mere fact that a relay was blocked (via BadExit or reject) can be
  published.  There is an ongoing discussion if we should do that.
  
  The discussion of observed malicious behaviour, however, can give
  the attacker a lot of knowledge which they can exploit in order to
  evade detection in the future.  Consider, for example, an HTTPS
  MitM attack which targets a small number of web sites.  If somebody
  reports only one of these targets, the attacker can spawn a new
  relay after discovery and simply reduce the set of targeted sites
  in order to remain under the radar.  This seems to be an uphill
  battle and it's difficult to have full transparency without giving
  dedicated adversaries a big advantage.

 You might find the proven approach used in other areas (security bugs)
 a viable option:

 Keep the discussion private until a decision has been reached; make
 it (the discussion) public once the report has been closed (whether
 with or without a flag or reject entry).

 This allows for transparency while at the same time not interfering
 with ongoing investigations.

Yes, it is generally not a problem to publish a security bug which has
already been fixed.  In fact, it is encouraged because it spreads
knowledge and awareness.

Our situation is slightly different, though.  By publishing the
discussion about a relay (even if it has already been disabled), we are
harming future endeavours: as long as we keep using the same method to
check for malicious relays, revealing that method would spoil it
until we switch to a different one.  That's not quite the case with
security bugs.  If we had plenty of resources, scanning modules, and
methods, the story would be different, but unfortunately that's not the
case.

One good example is documented in a recent research paper [0].  Section
5.2 describes how we chased a group of related malicious exit relays
over several months.  At some point the attackers began to sample MitM
attempts and target web sites.  Publishing our actions would probably
have helped the attackers substantially.

I think that in addition to publishing *which* relays were disabled, it would
also be safe to publish *why* they were disabled.  We could add a short
sentence along the lines of "running HTTPS MitM" or "running sslstrip".
Damian mentioned that in the other thread.

[0] https://petsymposium.org/2014/papers/Winter.pdf

Cheers,
Philipp


Re: [tor-talk] 'How to report bad relays' (blog entry)

2014-07-31 Thread Nusenu

 Will dir auth ops make the list of rejected relays, including the
 reasons for removal, available to the public for transparency?
 We did this for a time... ... but it was contentious so couldn't be
 continued.

Could you elaborate (or add references)? Why was it not acceptable to
list removed* relays (incl. the reason for removal)?


*) again: I'm only talking about completely removed relays since
'badexit' relays are visible/transparent anyway.


Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread Roger Dingledine
On Thu, Jul 31, 2014 at 04:12:33PM -0400, Philipp Winter wrote:
 One good example is documented in a recent research paper [0].  Section
 5.2 describes how we chased a group of related malicious exit relays
 over several months.  At some point the attackers began to sample MitM
 attempts and target web sites.  Publishing our actions would probably
 have helped the attackers substantially.

I think this is a really important point.

I'm usually on the side of transparency, and screw whether publishing
our methods and discussions impacts effectiveness.

But in this particular case I'm stuck, because the arms race is so
lopsidedly against us.

We can scan for whether exit relays handle certain websites poorly,
but if the list that we scan for is public, then exit relays can mess
with other websites and know they'll get away with it.

We can scan for incorrect behavior on various ports, but if the list
of ports and the set of behavior we do is public, then again relays are
free to mess with things we don't look for.

One way forward is a community-grown set of tools that are easy to
extend, and a bunch of people actively extending them and trying out
new things to look for. And then when they find something, they let
people know and others can verify it.

But then what -- we add that particular test to the set of "official"
tests that we do? And other people keep doing their secret "I wonder if
I can catch a new one" tests in the background? Or do we add all tests
that anybody has implemented, and try to cover "everything that matters",
whatever that means?

Another way forward is to design and deploy an adaptive test system,
which e.g. searches Google for some content, then fetches it with and
without Tor, and tries to figure out if the exit is messing with stuff.
That turns out to be a really tough research project, in that a lot of
web content is dynamic, so your results will be mostly false positives
unless you account for that somehow. That's what SoaT was aiming to do:
https://gitweb.torproject.org/torflow.git/blob/HEAD:/NetworkScanners/ExitAuthority/README.ExitScanning
Some researchers like Micah Sherr ("Validating Web Content with Senser")
might have components to contribute here, to recognize and discard false
positives (but at the cost of introducing new false negatives too).
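
A bare-bones sketch of the fetch-and-compare step, assuming a local Tor
client with its SOCKS port on 127.0.0.1:9050 and the requests[socks] extra
installed (picking *which* exit served the request would additionally need
the control port, e.g. via stem):

import hashlib

import requests

URL = "https://example.com/"  # placeholder target
TOR = {"http": "socks5h://127.0.0.1:9050",
       "https": "socks5h://127.0.0.1:9050"}

direct = requests.get(URL, timeout=30).content
via_tor = requests.get(URL, proxies=TOR, timeout=60).content

if hashlib.sha256(direct).digest() != hashlib.sha256(via_tor).digest():
    # Either dynamic content (the common case) or an exit messing with it.
    print("mismatch -- needs the false-positive filtering described above")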

So in summary, if we're going to dabble here and there and notice bad
exits in an ad hoc way, I think secrecy is one of the few tools we have to
not totally lose the arms race. If we're going to play the arms race more
seriously, then secrecy should become an increasingly less relevant tool.

But as usual a lot of research remains if we want to get there from here.

--Roger



Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread Seth David Schoen
Roger Dingledine writes:

 But in this particular case I'm stuck, because the arms race is so
 lopsidedly against us.
 
 We can scan for whether exit relays handle certain websites poorly,
 but if the list that we scan for is public, then exit relays can mess
 with other websites and know they'll get away with it.

I think the remedy is ultimately HTTPS everywhere.  Then the problem
is reduced to checking whether particular exits try to tamper with the
reliability or capacity of flows to particular sites, or with the public
keys that those sites present.  (And figuring out whether HTTPS and its
implementations are cryptographically sound.)

The arms race of "we don't really have any idea what constitutes correct
behavior for this vast number of sites that we have no relationship
with, but we want to detect when an adversary tampers with anybody's
interactions with them" seems totally untenable, for exactly the reasons
that you've described.  But detecting whether intermediaries are allowing
correctly-authenticated connections to endpoints is almost tenable,
even without relationships with those endpoints.
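
That endpoint-authentication check is comparatively easy to sketch: compare
the certificate a site presents through Tor with one fetched directly. This
assumes PySocks and a local Tor SOCKS port on 127.0.0.1:9050; sites that
rotate certificates across servers would still need special-casing:

import hashlib
import socket
import ssl

import socks  # PySocks

def cert_sha256(host, port=443, via_tor=False):
    # Hash the DER certificate that the server presents on this path.
    if via_tor:
        sock = socks.socksocket()
        sock.set_proxy(socks.SOCKS5, "127.0.0.1", 9050, rdns=True)
        sock.connect((host, port))
    else:
        sock = socket.create_connection((host, port))
    ctx = ssl.create_default_context()  # chain verification included
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        return hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()

print(cert_sha256("example.com") == cert_sha256("example.com", via_tor=True))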

(I do think that continuing to work on the untenable secret scanning
methods is great, because attackers should know that they may get caught.
It's a valuable area of impossible research.)

Yan has just added an "HTTP Nowhere" option to HTTPS Everywhere, which
prevents the browser from making any HTTP connections at all.  Right now
that would probably be quite annoying and confusing to Tor Browser users,
but maybe with some progress on various fronts it could become less so.

-- 
Seth Schoen  sch...@eff.org
Senior Staff Technologist   https://www.eff.org/
Electronic Frontier Foundation  https://www.eff.org/join
815 Eddy Street, San Francisco, CA  94109   +1 415 436 9333 x107


Re: [tor-talk] Why make bad-relays a closed mailing list?

2014-07-31 Thread Philipp Winter
On Thu, Jul 31, 2014 at 01:58:18PM -0700, Seth David Schoen wrote:
 Roger Dingledine writes:
 
  But in this particular case I'm stuck, because the arms race is so
  lopsidedly against us.
  
  We can scan for whether exit relays handle certain websites poorly,
  but if the list that we scan for is public, then exit relays can mess
  with other websites and know they'll get away with it.
 
 I think the remedy is ultimately HTTPS everywhere.  Then the problem
 is reduced to checking whether particular exits try to tamper with the
 reliability or capacity of flows to particular sites, or with the public
 keys that those sites present.  (And figuring out whether HTTPS and its
 implementations are cryptographically sound.)

It's not just about HTTP.  We've also seen attacks targeting SSH, SMTP,
IMAP, FTP, and XMPP.  While SSH's trust-on-first-use works reasonably
well and MitM attacks tend to be ineffective, XMPP is a different story
with at least one major client having had issues with authentication.

Cheers,
Philipp


Re: [tor-talk] Question about the myfamily variable in torrc

2014-07-31 Thread Cypher
On 07/31/2014 09:09 AM, Martin Kepplinger wrote:
 On 2014-07-31 01:02, Cypher wrote:
 I'm about to fire up another exit relay and I want to make sure users
 are protected from picking both of my relays in the same chain. So I
 came across the MyFamily variable in the torrc file. I have two
 questions about how to properly set this variable:

 1. In the docs, it says NOT to use your relay's fingerprint. What do I
 use instead, and where can I get it?

 2. Do I need to list the current relay in its own MyFamily variable or
 just every other relay I run excluding the current one?

 Thanks,
 Cypher


 As described in the manual
 (https://www.torproject.org/docs/tor-manual.html.en) you can use fingerprints
 or nicknames; fingerprints are preferred, I guess. The format is $fingerprint.
 
 You can just maintain one list of all your relays and put the same line
 in every torrc. It doesn't matter whether the relay itself is in it.

Many thanks, Martin! That answers my question.

Cypher




[tor-talk] understanding metrics bw graphs

2014-07-31 Thread Nusenu

Hi,

I have a question regarding the bw-flags graphs on metrics.tpo [1].

guard bw history:
a) Does this include the entire accumulated traffic of relays that have
the Guard flag? (Which would include the traffic of a guard relay
acting as a middle or exit relay, if it also has the Exit flag.)

or

b) Does this only and exclusively include the accumulated
traffic that *enters* the Tor network via all guard relays?
(To actually do that, relays would have to account for traffic
on a per-connection-type level.)

Which brings me to the next question (whose answer would be needed
to actually do the accounting required for (b)): is a relay able to tell
whether it is being used as the first or second hop solely by looking at
packets (not their source)?


The same question applies to the exit bw history.

Does your answer also apply to [2]?



[1] https://metrics.torproject.org/bandwidth.html#bandwidth-flags

[2] https://metrics.torproject.org/bandwidth.html#bwhist-flags


Re: [tor-talk] Spoofing a browser profile to prevent fingerprinting

2014-07-31 Thread Öyvind Saether
 The simplest way would be to add a VE service on a Tor relay
 publishing a browser. You would not only be asking Tor relay
 providers to donate some bandwidth but also some processing power.

So I run the relay with a VE and now I get to see what everyone using
that VE is doing? I am not sure this is a good idea.


-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk