Re: [tor-dev] The Onion Name System (OnioNS)

2015-05-23 Thread Christian Grothoff
On 05/23/2015 06:26 PM, OnioNS Dev wrote:
 My design also assumes
 that there is no dynamic compromise of Tor routers (there's no
 incentive for an attacker to target Tor routers because of OnioNS)

I can live with explicitly stated design assumptions, but the claim that
there is no incentive for an attacker to target Tor routers because of
OnioNS is rather wild.

 With Namecoin, you have an inherent limit on the rate at which
 names can be registered.  Now, once people start squatting tons of
 .tor names, maybe even your bandwidth advantage disappears as the
 consensus may become rather large.
 That's a fair point. It's a hard problem to solve. It's subtle, but I
 also put in a requirement that the network must also ensure that the
 registration points to an available hidden service. Thus it forces
 innocent users and attackers to also spin up a hidden service. It's
 not foolproof, but it's better than nothing.

Interesting. Is a powerful adversary able to prevent registration by
somehow denying/delaying access to the new .onion service while
concurrently submitting a competing registration for the same name? I
remember such attacks being discussed for DNS, where a candidate's
search for available names might cause those names to be quickly
reserved by some automated registrar as a means to extort name
re-assignment fees.  Just wondering if you considered this possibility.
(IIRC Namecoin defends against this by having an additional
commit-and-reveal process, where the name is first reserved without the
name itself being revealed.)
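For concreteness, the commit-and-reveal defence can be sketched in a few
lines. This is a toy illustration only (SHA-256 as the commitment hash and
the function names are my own, not Namecoin's actual scheme): the commit
phase publishes only a salted hash, and the reveal phase later binds it to
the name, so a front-runner watching the chain cannot learn which name is
being registered.

```python
import hashlib
import os

def commit(name: str) -> tuple[bytes, bytes]:
    """Phase 1: publish only a salted hash, hiding the name itself."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + name.encode()).digest()
    return salt, digest  # digest is published; salt stays private until reveal

def reveal_matches(name: str, salt: bytes, digest: bytes) -> bool:
    """Phase 2: reveal (name, salt); anyone can verify it against the commit."""
    return hashlib.sha256(salt + name.encode()).digest() == digest

salt, digest = commit("example.tor")
assert reveal_matches("example.tor", salt, digest)
# An observer who saw only `digest` learns nothing about the name, so it
# cannot pre-register "example.tor" during the commit window.
assert not reveal_matches("squatted.tor", salt, digest)
```
The window between commit and reveal is what makes extortion-by-squatting
unprofitable: by the time the name becomes visible, it is already reserved.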

 I've also been thinking
 about a proof-of-stake solution wherein the network only accepts a
 registration if the destination HS has been up for > X days.

Can a HS have more than one name?

 Another
 idea is to have the Quorum select a random time during the week, test
 for the availability of the hidden service, and then sign whether
 they saw the HS or not. Then the next Quorum could repeat this test,
 check the results from the previous Quorum, and void the Record if
 they also observed that the hidden service was down. I like both of
 these ideas, but I have not yet solidified their implementation so I
 was not ready to announce them in the paper.

Sure, good time to discuss them then ;-).

 Well, I prefer my hidden services to be really hidden and not
 public. I understand that this weakness is somewhat inherent in the
 design, but you should state it explicitly.
 Too late, your hidden services are already leaking across Tor's
 distributed hash table.

Today, yes. Tomorrow, who knows; I'm still hoping that the next
generation of HS will fix that, and I hope to get Tor to accept the
GNS method for encrypting information in the DHT. Which, btw, is pretty
generic (we also use it in GNUnet file sharing, and I have other plans
as well). In fact, I think if you look at the GNS crypto closely, it
might offer a way to encrypt most information in any DHT (and offer
confidentiality against an adversary that cannot guess the
name/label/keyword or perform a confirmation attack).
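The core of the idea can be sketched as follows. Note the heavy caveats:
this is a stand-alone toy, not GNS's actual construction (GNS derives keys
from the label and the zone key and uses real authenticated encryption; the
SHA-256-counter keystream below is purely illustrative). The point it
demonstrates is only the key property: a DHT node that cannot guess the
label cannot decrypt, or even confirm the name of, a stored record.

```python
import hashlib

def derive_key(label: str, zone_pub: bytes) -> bytes:
    # The symmetric key depends on the (secret) label, so storage nodes
    # that do not know the label cannot decrypt or confirm the record.
    return hashlib.sha256(b"record-key|" + zone_pub + b"|" + label.encode()).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Illustrative keystream: SHA-256 in counter mode. NOT GNS's cipher;
    # a real deployment would use authenticated encryption.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

zone_pub = b"\x01" * 32                       # stand-in for a zone public key
record = b"onion=abcdefghijklmnop.onion"
ct = xor_stream(derive_key("mylabel", zone_pub), record)
assert xor_stream(derive_key("mylabel", zone_pub), ct) == record   # right label
assert xor_stream(derive_key("guess", zone_pub), ct) != record     # wrong label
```
An adversary can still mount a confirmation attack by guessing likely
labels, which is exactly the limitation noted above.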

 There are even Tor technical reports and
 graphs on metrics.torproject.org which count them, which I assume
 also implies that they are enumerated. 

You are totally right about the status quo. I just would point out that
this may not be true in 2020 ;-).

 It's not about CPU power, it's
 about the honesty of nodes in the Tor network.

I understand that. But whether you base it on IPs, bandwidth or CPU, you
have still lowered the bar for the adversary.

 That gives me the
 globally collision-free property. Perhaps I have lowered the bar, but
 I do think it's a bit higher than Namecoin because OnioNS is only
 dependent on the distribution of identities, and not the distribution
 of CPU power.

I agree that it is probably easier to mount a 51% CPU-attack against
Namecoin than an attack against the OnioNS quorum.

 Could we please make the protocol a bit more general than this?
 Yes, I will look into it. Your description is helpful, but if you
 want to write up a protocol describing what you want on your end,
 I'll merge it into my protocol, and then we'll have a protocol that
 is compatible with both of our needs. I would be happy to modify my
 software accordingly.

I agree that we should have a write-up, but have to add that I hope to
delegate most of the writing to Jeff ;-).

Happy hacking!

Christian



___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Simple Relay Scanner Example

2015-05-23 Thread Damian Johnson
Hi Roger, earlier this week you asked me for a simple
bandwidth-authority style scanner. This dovetailed nicely with a task
that's been on my todo list for ages to make a tutorial for manual
path selection so here ya go!

https://stem.torproject.org/tutorials/to_russia_with_love.html#custom-path-selection

Cheers! -Damian


Re: [tor-dev] The Onion Name System (OnioNS)

2015-05-23 Thread OnioNS Dev

On 05/20/2015 12:18 AM, tor-dev-requ...@lists.torproject.org wrote:
 Furthermore, there is the question of time.  As .tor names are pinned (if you 
 have a name, you get to keep it 'forever', right?), an adversary may invest 
 in the required resources to make the attack succeed *briefly* (i.e. get 
 the required majority), then re-assign names to himself. New honest nodes 
 would pick up this hacked consensus, and thus it would persist even after the 
 adversary lost the majority required to establish it.  This is relevant, as 
 an attack against Tor users' anonymity only impacts the users as long as the 
 attack itself lasts, so the gains between the two attacks (temporary vs. 
 forever) change the economic incentives for performing them.

 So I'm not sure it is such an obvious choice to just rely on an honest 
 majority of (long-term/etc.) Tor routers.  I'm not saying it is bad; however, 
 simply saying that if those routers are compromised all is lost anyway is not 
 quite correct.
To carry out the attack you describe, they would need to control enough 
colluding Tor nodes that they form the largest agreeing subset in the 
Quorum. It's not about PoW to control the Quorum; the Quorum is a group of Tor 
routers. My design also assumes that there is no dynamic compromise of Tor 
routers (there's no incentive for an attacker to target Tor routers because of 
OnioNS), so we can consider a static level of compromise. As I've shown in my 
analysis, if the Quorum is large enough, the chances of selecting a malicious 
Quorum, either per-selection or cumulatively, are extremely low even at 
Tor-crippling levels of collusion.
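The per-selection claim is easy to sanity-check with a hypergeometric tail
(the parameters below are illustrative, not the paper's actual figures): the
probability that a uniformly sampled Quorum of q relays, drawn from N relays
of which m collude, contains a colluding majority.

```python
from math import comb

def p_malicious_majority(N: int, m: int, q: int) -> float:
    """P(a uniform random quorum of q relays out of N, m of them colluding,
    contains more than q/2 colluding members): hypergeometric upper tail."""
    total = comb(N, q)
    return sum(comb(m, k) * comb(N - m, q - k)
               for k in range(q // 2 + 1, q + 1)) / total

# Illustrative numbers: 6000 relays, 20% colluding (already Tor-crippling),
# and a quorum of 127 relays.
p = p_malicious_majority(6000, 1200, 127)
assert p < 1e-10  # vanishingly small per selection
```
Cumulative risk over repeated selections is then bounded by roughly the
per-selection probability times the number of selections, which stays tiny
for any plausible lifetime of the system.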

 Ok, let's assume a c4.xlarge EC2 instance (which is ~i7) takes 4h to do this 
 (on all cores).  For one month, the price is USD 170, which means the 
 registration cost is ~70 cents/name (for eternity, or do I have to do this 
 repeatedly? I don't recall if you require fresh PoWs). Anyway, 4h sounds 
 pretty inconvenient to a user, but as you can see it is still nothing for a 
 professional domain name squatter, who today pays closer to 100x that to squat 
 for a year.  I predict most 'short' names will be taken in no time if this is 
 deployed.

 With Namecoin, you have an inherent limit on the rate at which names can be 
 registered.  Now, once people start squatting tons of .tor names, maybe even 
 your bandwidth advantage disappears as the consensus may become rather large.
That's a fair point. It's a hard problem to solve. It's subtle, but I also put 
in a requirement that the network must ensure that the registration points 
to an available hidden service. Thus it forces innocent users and attackers 
alike to spin up a hidden service. It's not foolproof, but it's better than 
nothing. I've also been thinking about a proof-of-stake solution wherein the 
network only accepts a registration if the destination HS has been up for > X 
days. Another idea is to have the Quorum select a random time during the week, 
test for the availability of the hidden service, and then sign whether they saw 
the HS or not. Then the next Quorum could repeat this test, check the results 
from the previous Quorum, and void the Record if they also observed that the 
hidden service was down. I like both of these ideas, but I have not yet 
solidified their implementation, so I was not ready to announce them in the 
paper.
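The second idea amounts to a small per-Record state machine. Here is a
minimal sketch (the names and the two-strike threshold are my own reading of
the proposal; signing and the randomised test times are omitted): a Record
is voided only after two consecutive Quorums independently observe the
hidden service down, and any sighting in between resets it.

```python
from enum import Enum

class RecordState(Enum):
    VALID = "valid"
    SUSPECT = "suspect"   # one Quorum saw the HS down
    VOID = "void"         # two consecutive Quorums saw it down

def next_state(state: RecordState, hs_reachable: bool) -> RecordState:
    """Apply one Quorum's availability observation to a Record."""
    if state is RecordState.VOID:
        return RecordState.VOID            # voiding is final
    if hs_reachable:
        return RecordState.VALID           # any sighting resets the record
    return RecordState.SUSPECT if state is RecordState.VALID else RecordState.VOID

state = RecordState.VALID
for observed_up in [True, False, True, False, False]:
    state = next_state(state, observed_up)
assert state is RecordState.VOID  # two consecutive "down" results void it
```
Requiring agreement between two independently selected Quorums means a
single malicious Quorum cannot void an honest Record on its own, at the cost
of one extra test period of latency before squatted names expire.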

 Well, I prefer my hidden services to be really hidden and not public. I 
 understand that this weakness is somewhat inherent in the design, but you 
 should state it explicitly.
Too late, your hidden services are already leaking across Tor's distributed 
hash table. There are even Tor technical reports and graphs on 
metrics.torproject.org which count them, which I assume also implies that they 
are enumerated. I can't remember where I read it, but I do recall reading a 
report where the researcher spun up a number of hidden services, didn't 
tell anyone about them, and then observed that someone connected to them anyway 
from time to time. Someone out there is enumerating HSs. Tor's HS protocol 
isn't designed to hide the existence of HSs, and neither is my system. I can 
state it explicitly, but there's no practical way around it as far as I can see.

 See, that's the point: Namecoin (and your system) assume a different 
 adversary model than what Zooko intended when he formulated his 
 triangle.  When Zooko said 'secure', he meant secure against an adversary 
 that has more CPU power than all of the network combined and unlimited 
 identities.  When you say 'secure', you talk about Namecoin's adversary 
 model, where the adversary is in the minority (CPU- and 
 identity/bandwidth-wise).

 Thus, it is unfair for you to say that your system 'solves' Zooko's triangle, 
 as you simply lowered the bar.
As you say, Namecoin assumes that it has more CPU power than adversaries. 
Perhaps I should have clarified more 

Re: [tor-dev] OpenWrt cross compile build error in 0.2.6.8

2015-05-23 Thread Lars Boegild Thomsen
On Friday 22 May 2015 09:20:29 Shawn Nock wrote:
 Will you post the Makefile for the buildroot package?

Sorry guys, I sorted this one out.  The culprit was actually my added -std=c99, 
which made lots of other stuff break.  Taking that out, it boiled down to a few 
uses of:

for (int i = 0; ...

Changing those to:

int i;
for (i = 0; ...

Fixed the build.

I have attached a patch.

--
Lars Boegild Thomsen
https://reclaim-your-privacy.com
Jabber/XMPP: l...@reclaim-your-privacy.com

--- a/src/common/address.c
+++ b/src/common/address.c
@@ -499,7 +499,8 @@ tor_addr_parse_PTR_name(tor_addr_t *resu
   return -1;
 
 cp = address;
-for (int i = 0; i < 16; ++i) {
+int i;
+for (i = 0; i < 16; ++i) {
   n0 = hex_decode_digit(*cp++); /* The low-order nybble appears first. */
   if (*cp++ != '.') return -1;  /* Then a dot. */
   n1 = hex_decode_digit(*cp++); /* The high-order nybble appears first. */
--- a/src/test/test_routerlist.c
+++ b/src/test/test_routerlist.c
@@ -30,7 +30,8 @@ test_routerlist_initiate_descriptor_down
   smartlist_t *digests = smartlist_new();
   (void)arg;
 
-  for (int i = 0; i < 20; i++) {
+  int i;
+  for (i = 0; i < 20; i++) {
 smartlist_add(digests, (char*)prose);
   }
 
@@ -74,7 +75,8 @@ test_routerlist_launch_descriptor_downlo
   char *cp;
   (void)arg;
 
-  for (int i = 0; i < 100; i++) {
+  int i;
+  for (i = 0; i < 100; i++) {
 cp = tor_malloc(DIGEST256_LEN);
 tt_assert(cp);
 crypto_rand(cp, DIGEST256_LEN);