Re: [freenet-dev] Pitch Black Attack - Analysis, Code, Etc.

2013-01-31 Thread Michael Grube
On Mon, Jan 28, 2013 at 6:22 PM, Matthew Toseland t...@amphibian.dyndns.org wrote:

 On Monday 28 Jan 2013 21:39:54 Michael Grube wrote:
  On Mon, Jan 28, 2013 at 4:14 PM, Matthew Toseland t...@amphibian.dyndns.org wrote:

   On Monday 28 Jan 2013 18:09:07 Michael Grube wrote:
On Sun, Jan 27, 2013 at 12:30 PM, Matthew Toseland t...@amphibian.dyndns.org wrote:

 On Sunday 27 Jan 2013 05:02:17 Michael Grube wrote:
  Hi everyone,

  Around this time last year I started on work to simulate the pitch black
  attack working against Oskar Sandberg's swapping algorithm that is
  implemented for use with darknet. The work is essentially incomplete, but
  I did enough to get an idea of how well Oskar's proposed solution to the
  Pitch Black Attack works. Hopefully the information that follows can
  provide some insight for anybody working on this problem.

 It looks like we have a good chance of using Oskar's original plan. Maybe
 even getting it published, with some help (carl might help even if you
 don't have time?).

  The code is messy, so I'm going to do a walkthrough of how exactly I ran
  the simulation.

  To start off, my code is available at http://github.com/mgrube/pbsim .

  Let's start. The first thing I did was create the small world network
  that is assumed in the darknet. The graph can obviously be of any size,
  but in our experiment we'll make the network size 1000 nodes. This is
  pretty simple in Python and can be accomplished with one line in the
  networkx library:
 You did check that there isn't a scalability issue? :)

I tested with 10,000 nodes as well and the results did not vary by much.
The most important difference I noticed was that 2 attackers became a less
significant number. Not that this really means anything to a would-be
attacker.

If you are convinced that scalability is a problem, I can add support for
threads to what I have and make it easy to simulate 100,000 or 1M or
whatever number we want to try.

   I don't know. In my experience threads complicate matters quite a bit.

  Certainly they can. In this case it would help, however.

 I wonder if it would be worth writing up the natural pitch black via churn
 evolution we saw in ~ 2008. Basically, when you have churn, newbies end up
 with the worst locations i.e. those furthest away from the main clusters.
 So even without an attack, the network locations become more and more
 clustered. We fixed it by periodic randomisation, which seemed to have
 relatively little cost - the nodes quickly got their old locations back,
 more or less.

 Another thing we want to look into is what the cost is of swapping
 (especially on a growing network, or even two networks merging) in terms of
 losing datastores due to locations changing. That might need more detailed
 simulations...

I will see what I can do about looking into these sometime later this week.

   Good.

  We can see the aftermath by looking at a histogram of node locations. The
  randomize function uses a random function to assign each node a location,
  so first let's look at a histogram of locations before the attack:

  http://127.0.0.1:/CHK@ODZ1s5SDYrVvyNo0ONh4O9rtI~pcVmTSShh47UFPY5U,SKJfkX2eswHMrqidDWTUoZKGMaZ9yt0l6uLUZMmxOqk,AAMC--8/preattacklocations.PNG
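
The images are hosted inside Freenet, so for readers who cannot fetch them: the plot being described is a plain histogram of node locations, along these lines (a sketch assuming the `locations` dict from the earlier snippet, not the original plotting code):

import matplotlib.pyplot as plt

# Histogram of node locations in [0, 1).
plt.hist(list(locations.values()), bins=50, range=(0.0, 1.0))
plt.xlabel("node location")
plt.ylabel("number of nodes")
plt.title("Node locations before the attack")
plt.savefig("preattacklocations.png")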

 Surprisingly wide range of concentrations.

  The biasing locations for these attack nodes were:
  .6935
  .1935
  .9435
  .4435
  .4665
  .9665
  .7165
  .2165

  Our histogram of node locations now shows a disproportionate number of
  nodes with those locations:

  http://127.0.0.1:/CHK@aI0BN0NXEjU--8dFtCYZwPwUWcM0rpamIf3lnv7FfHc,SCr2NPJYZVpFJKSf-qDYerQTQyDfdoV3-DeX-W1e91I,AAMC--8/postattack.PNG

 Scary!

Quite.

  So, the attack with only two nodes is obviously very effective. It's
  important to note that the attack simulation method assumed that nodes
  were attacking before the swapping algorithm had a chance to organize the
  network. This is something of a worst case scenario.

 So this is attacking after we've done some swapping but not enough to
 reach the point of diminishing returns?

  Now, let's measure the effectiveness of Oskar Sandberg's proposed
  solution, which is described on the bug tracker:
  https://bugs.freenetproject.org/view.php?id=3919

  We can test Sandberg's solution by using:

  sandbergsolution(sandberg_solution_network, attackers, .037)

  The last parameter, .037, is the distance threshold for re-randomizing a
  node's location. To be 

[freenet-dev] Maven revisited

2013-01-31 Thread Ian Clarke
I was thinking about the fact that we still build Freenet using the tools
that were available to us a decade ago, while the Java world has moved on
to more sophisticated dependency management tools like Maven.

I recall that the reason for not using Maven is that it doesn't operate
over a secure connection, and it leaves us open to the compromise of any of
the Maven repositories hosting Freenet's dependencies.

This is despite the fact that no such compromise has ever occurred on any
project that I'm aware of, and since we don't do code audits of Freenet's
current dependencies, our current approach doesn't immunize us against it
anyway.

However, one approach that might alleviate this concern would be to run our
own Maven repository, which would host any dependencies we need, and then
configure Maven not to pull from the central Maven repos.

There is the other issue that Maven can be a PITA to use; however, there are
similar alternatives: http://www.streamhead.com/maven-alternatives/

Thoughts?

Ian.

-- 
Ian Clarke
Founder, The Freenet Project
Email: i...@freenetproject.org
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Maven revisited

2013-01-31 Thread Michael Grube
On Thu, Jan 31, 2013 at 11:36 AM, Ian Clarke i...@freenetproject.org wrote:

 I was thinking about the fact that we still build Freenet using the tools
 that were available to us a decade ago, while the Java world has moved on
 to more sophisticated dependency management tools like Maven.

 I recall that the reason for not using Maven is that it doesn't operate
 over a secure connection, and it leaves us open to the compromise of any of
 Freenet's dependencies Maven repositories.

 This is despite the fact that no such compromise as ever occurred on any
 project that I'm aware of, and since we don't do code audits of Freenet's
 current dependencies, our current approach doesn't immunize us against it
 anyway.

 However, one approach that might alleviate this concern is that we run our
 own Maven repository which will host any dependencies we need, and then
 configure Maven not to pull from the central Maven repos.




 There is the other issue that Maven can be a PITA to use, however there
 are similar alternatives: http://www.streamhead.com/maven-alternatives/



 Thoughts?



Maven's really not that bad. If people are absolutely terrified about
dependencies being compromised, maybe make a quick script to do a checksum
on the dependencies once they're downloaded.
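
A script along those lines might look like this; the jar names and the expected SHA-256 digests below are placeholders, not real Freenet dependency hashes:

import hashlib
import sys

# Placeholder names and digests: fill in the real dependency jars and their
# known-good SHA-256 hashes.
EXPECTED = {
    "freenet-ext.jar": "0" * 64,
    "bcprov.jar": "0" * 64,
}

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

ok = True
for name, expected in EXPECTED.items():
    actual = sha256_of(name)
    if actual != expected:
        print("checksum mismatch for %s: %s" % (name, actual))
        ok = False

sys.exit(0 if ok else 1)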




 Ian.

 --
 Ian Clarke
 Founder, The Freenet Project
 Email: i...@freenetproject.org

 ___
 Devl mailing list
 Devl@freenetproject.org
 https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

[freenet-dev] Pitch Black Attack - Analysis, Code, Etc.

2013-01-31 Thread Michael Grube
Old response that was never forwarded.

On Sun, Jan 27, 2013 at 9:06 AM, Arne Babenhauserheide arne_...@web.de wrote:

 Hi Snark,

 Thank you for posting! Your analysis looks pretty good.

 On Sunday, 27 January 2013, 00:02:17, Michael Grube wrote:

  Not bad! There is obviously still some influence but the location
  distribution has evened out noticeably.
 
  There is one down side to this solution, however, and that is that it
  appears to affect search performance. By how much, I am not sure, but our
  link length distribution is now looking less ideal:
 
 
 http://127.0.0.1:/CHK@TdODwHOdC9peiHYGtTxDa9yy9v0lXSHKWW4G7wM5-~A,OIy08YxNZdg4M3vpgm7wETOhUvU3RYFzrkJQ7No9poE,AAMC--8/deterioratinglinkdist.PNG

 What happens if you now apply normal swapping to this distribution? Does
 it get better or do we see a general problem of swapping?


Do you mean without attackers? Changing back to the original swapping
method with attackers makes the location distribution fall apart again.
Without attackers from this point on, the link length distribution moves
back to the ideal distribution again. I just ran this again to be sure, but
the resulting graph is nothing new, so I didn't insert it.



 (in some tests while discussing probes, a swapping example I wrote worked
 well for some stuff, but broke down with certain configurations)

 The link length distribution could be a pretty big problem…

 Compare it with the real distribution:


 http://127.0.0.1:/USK@pxtehd-TmfJwyNUAW2Clk4pwv7Nshyg21NNfXcqzFv4,LTjcTWqvsq3ju6pMGe9Cqb3scvQgECG81hRdgj5WO4s,AQACAAE/statistics/148/plot_link_length.png


My first thought is that I'd like to see this graph with link length from 0
to 1. Also, it's important to note that this graph shows the percentage of
nodes with that link length or smaller, whereas the graphs I inserted are
counts of the number of links at the distance marked on the x axis. This difference
might have been obvious to some people, but I just want to be sure
everybody sees that.
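
One quick way to compare the two directly is to turn the simulated per-link counts into a cumulative curve; a minimal sketch, assuming `link_lengths` is a list of the simulated link distances (the helper name is illustrative):

import numpy as np

def link_length_cdf(link_lengths):
    """Sorted lengths and cumulative fraction of links at or below each length."""
    lengths = np.sort(np.asarray(link_lengths, dtype=float))
    cumulative = np.arange(1, len(lengths) + 1) / float(len(lengths))
    return lengths, cumulative

# e.g. plot link_length_cdf(simulated_link_lengths) next to the
# statistics-page curve, over the same x range.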

I can't promise anything immediate, but I was already implementing the
search algorithm in my simulation. I can try to get some actual numbers by
this time next week.



 Best wishes,
 Arne

 PS: I also like it that you used freenet itself for hosting!


Of course =)


 --
 To be unpolitical
 means to be political,
 without noticing it.
 - Arne (http://draketo.de)



___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Maven revisited

2013-01-31 Thread Matthew Toseland
On Thursday 31 Jan 2013 17:50:32 Michael Grube wrote:
 On Thu, Jan 31, 2013 at 11:36 AM, Ian Clarke i...@freenetproject.org wrote:
 
  I was thinking about the fact that we still build Freenet using the tools
  that were available to us a decade ago, while the Java world has moved on
  to more sophisticated dependency management tools like Maven.
 
  I recall that the reason for not using Maven is that it doesn't operate
  over a secure connection, and it leaves us open to the compromise of any of
  Freenet's dependencies Maven repositories.
 
  This is despite the fact that no such compromise as ever occurred on any
  project that I'm aware of, and since we don't do code audits of Freenet's
  current dependencies, our current approach doesn't immunize us against it
  anyway.

Have you actually tried to find out?
 
  However, one approach that might alleviate this concern is that we run our
  own Maven repository which will host any dependencies we need, and then
  configure Maven not to pull from the central Maven repos.
 
  There is the other issue that Maven can be a PITA to use, however there
  are similar alternatives: http://www.streamhead.com/maven-alternatives/
 
  Thoughts?
 
 Maven's really not that bad. If people are absolutely terrified about
 depedencies being compromised, maybe make a quick script to do a checksum
 on the dependencies once they're donwloaded.

Maven does not do any sort of signature checking. Maven's own repository 
doesn't even do SSL IIRC. It is therefore not suitable for building binaries 
that will be distributed. In my view that is true of any binaries distributed 
to anyone, and it is certainly true of building binaries for an auto-updater 
capable of deploying to 5,000 nodes within an hour - a significant target for 
conventional malware even if it weren't for the fact that some of these people 
really do need their privacy.

If we run our own repository:
- We need to maintain it. This is more unnecessary work.
- We need to host it. This is more CPU usage on the small, cheap, rather 
limited VM that runs the website etc.

But most importantly, we need it to be reasonably easy to *develop Freenet 
anonymously*. This is not a theoretical aspiration. There are anonymous 
developers today, and some of them are extremely productive at times.

Exactly what problem are you trying to solve here? It's really not that hard to 
build Freenet. Granted it should be easier; the immediate problem is you need 
not only freenet-ext.jar (which the build scripts will fetch for you if you set 
one line in a config file; the first time you run ant it will tell you this), 
but also the bouncycastle jar, which isn't auto-fetched.

If you really want security advice ask nextgens. But it looks to me like Maven 
is hopeless for our purposes. For a non-security-related project, for a single 
developer who doesn't distribute the resulting binaries, fine. For a corporate 
setup where both the developers and the server are inside the firewall, fine. 
But for us, it does not make sense.

Regarding not auditing dependencies, we do try to obtain clean copies of our 
dependencies. Also most of them aren't security critical, and so aren't updated 
regularly. Ordinarily this would be a bad thing - but it does reduce the number 
of opportunities for malware to slip in. The biggest dependency is db4o, and 
IMHO we should get rid of it soon; it's been nothing but a nightmare. Whenever 
we have looked into updating it we have found new and wonderful bugs, and so 
haven't bothered...

In any case, the fact that we haven't audited every line of some of our 
dependencies is not an excuse for failing to perform basic due diligence on 
our build process. Freenet is security sensitive and has an auto-updater; it's 
not safe for us to just grab jars from wherever and hope for the best, which 
seems to be what most of the Java community do. And it's what Maven does too, 
without any form of authentication.

The best person to ask for security advice on this sort of issue is Nextgens 
anyway. He's been around lately.


signature.asc
Description: This is a digitally signed message part.
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Pitch Black Attack - Analysis, Code, Etc.

2013-01-31 Thread Matthew Toseland
On Thursday 31 Jan 2013 16:16:38 Michael Grube wrote:
  
  So how exactly do you use the 0.037 constant?
 
  If you don't have a peer with distance greater than 0.037 * (distance from
  random location to nearest node to the random location), then you reset?
  (This will break for nodes with very small degree...)
 
 No. It's not based on your immediate peer - a probe doing a DFS search
 returns the closest result to some key that is selected at random. If the
 closest node identifier to the randomly selected location is further than
 some distance, the node originating the probe needs to randomize its
 location.
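
As a rough illustration of that kind of probe (not the pbsim or Freenet implementation; the helper names and the unbounded search are simplifications), in networkx terms:

import networkx as nx

def circ_dist(a, b):
    """Circular distance between two locations on the [0, 1) ring."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def probe_closest(g, locations, start, target):
    """Walk the graph depth-first from `start` and return the location of the
    node closest to `target`, plus that distance. A real probe would be
    hop-limited; this sketch simply scans every reachable node."""
    best_loc = locations[start]
    best_dist = circ_dist(best_loc, target)
    for node in nx.dfs_preorder_nodes(g, start):
        d = circ_dist(locations[node], target)
        if d < best_dist:
            best_loc, best_dist = locations[node], d
    return best_loc, best_dist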

I don't understand. The node originating the search is not the victim. It 
doesn't have a wrong location. So why does this help at all?

Oskar's proposal, as you quoted:

   From your notes in the bug tracker:

   Pick a key randomly, route for it with a special query that returns the
   nearest node identifier to the key found. If the closest you can get is
   much further than your distance to your neighbors, give up your current
   position for the random one. The definition of much further needs to be
   determined experimentally, but it shouldn't be an issue (since the attack
   in question works by putting a neighbor thousands of times closer to you
   than it should be).

In other words, we use the probe to find out how far we *should* be from our 
neighbours. Then if we are much too close, we are probably the victim of an 
attack, so we randomise.


signature.asc
Description: This is a digitally signed message part.
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Maven revisited

2013-01-31 Thread Michael Grube
On Thu, Jan 31, 2013 at 2:31 PM, Michael Grube michael.gr...@gmail.com wrote:



 On Thu, Jan 31, 2013 at 1:59 PM, Matthew Toseland 
 t...@amphibian.dyndns.org wrote:

 On Thursday 31 Jan 2013 17:50:32 Michael Grube wrote:
  On Thu, Jan 31, 2013 at 11:36 AM, Ian Clarke i...@freenetproject.org
 wrote:
 
   I was thinking about the fact that we still build Freenet using the
 tools
   that were available to us a decade ago, while the Java world has
 moved on
   to more sophisticated dependency management tools like Maven.
  
   I recall that the reason for not using Maven is that it doesn't
 operate
   over a secure connection, and it leaves us open to the compromise of
 any of
   Freenet's dependencies Maven repositories.
  
   This is despite the fact that no such compromise as ever occurred on
 any
   project that I'm aware of, and since we don't do code audits of
 Freenet's
   current dependencies, our current approach doesn't immunize us
 against it
   anyway.

 Have you actually tried to find out?
  
   However, one approach that might alleviate this concern is that we
 run our
   own Maven repository which will host any dependencies we need, and
 then
   configure Maven not to pull from the central Maven repos.
  
   There is the other issue that Maven can be a PITA to use, however
 there
   are similar alternatives:
 http://www.streamhead.com/maven-alternatives/
  
   Thoughts?
  
  Maven's really not that bad. If people are absolutely terrified about
  depedencies being compromised, maybe make a quick script to do a
 checksum
  on the dependencies once they're donwloaded.

 Maven does not do any sort of signature checking. Maven's own repository
 doesn't even do SSL IIRC.


 http://maven.apache.org/guides/mini/guide-repository-ssl.html



 It is therefore not suitable for building binaries that will be
 distributed. In my view this is true of any binaries that will be
 distributed to anyone, but it certainly isn't true of building binaries for
 an auto-updater capable of deploying 5,000 nodes within an hour - a
 significant target for conventional malware even if it wasn't for the fact
 that some of these people really do need their privacy.

 If we run our own repository:
 - We need to maintain it. This is more unnecessary work.
 - We need to host it. This is more CPU usage on the small, cheap, rather
 limited VM that runs the website etc.

 But most importantly, we need it to be reasonably easy to *develop
 Freenet anonymously*. This is not a theoretical aspiration. There are
 anonymous developers today, and some of them are extremely productive at
 times.


 Some kind of Infocalypse bridge?



 Exactly what problem are you trying to solve here?



 It's really not that hard to build Freenet. Granted it should be easier;
 the immediate problem is you need not only freenet-ext.jar (which the build
 scripts will fetch for you if you set one line in a config file; the first
 time you run ant it will tell you this), but also the bouncycastle jar,
 which isn't auto-fetched.

 If you really want security advice ask nextgens. But it looks to me like
 Maven is hopeless for our purposes. For a non-security-related project, for
 a single developer who doesn't distribute the resulting binaries, fine. For
 a corporate setup where both the developers and the server are inside the
 firewall, fine. But for us, it does not make sense.

 Regarding not auditing dependencies, we do try to obtain clean copies
 of our dependencies. Also most of them aren't security critical, and so
 aren't updated regularly. Ordinarily this would be a bad thing - but it
 does reduce the number of opportunities for malware to slip in. The biggest
 dependency is db4o, and IMHO we should get rid of it soon, it's been
 nothing but a nightmare. Whenever we have looked into updating it we have
 found new and wonderful bugs, and so haven't bothered...

 In any case, the fact that we haven't audited every line of some of our
 dependencies is not an excuse for failing to perform basic due dilligence
 on our build process. Freenet is security sensitive, it has an
 auto-updater, it's not safe for us to just grab jars from wherever and hope
 for the best, which seems to be what most of the Java community do. And
 it's what Maven does too, without any form of authentication.

 The best person to ask for security advice on this sort of issue is
 Nextgens anyway. He's been around lately.



___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Pitch Black Attack - Analysis, Code, Etc.

2013-01-31 Thread Matthew Toseland
On Thursday 31 Jan 2013 19:24:27 Michael Grube wrote:
 On Thu, Jan 31, 2013 at 2:06 PM, Matthew Toseland t...@amphibian.dyndns.org
  wrote:
 
  On Thursday 31 Jan 2013 16:16:38 Michael Grube wrote:

So how exactly do you use the 0.037 constant?

If you don't have a peer with distance greater than 0.037 * (distance from
random location to nearest node to the random location), then you reset?
(This will break for nodes with very small degree...)
  
   No. It's not based on your immediate peer - a probe doing a DFS search
   returns the closest result to some key that is selected at random. If the
   closest node identifier to the randomly selected location is further than
   some distance, the node originating the probe needs to randomize its
   location.
 
  I don't understand. The node originating the search is not the victim. It
  doesn't have a wrong location. So why does this help at all?
 
The node initiating the search very well could be a victim. Yes, if all of
their peers are malicious then they are out of options, but assuming most
of their trusted peers are not malicious, Sandberg's algorithm works just fine.
 All nodes with malicious biases should be considered victims, IMO.

So when a proportion of the network have bad locations, and therefore there is 
a gap in the ring, you want an equivalent number of nodes to reset their 
locations and hopefully fill the gap? Isn't this going to be far less efficient 
than having *the nodes that are actually affected* reset their locations? I.e. 
it'll either do far too much resetting or not nearly enough?
 
  Oskar's proposal, as you quoted:

 From your notes in the bug tracker:

 Pick a key randomly, route for it with a special query that returns the
 nearest node identifier to the key found. If the closest you can get is
 much further than your distance to your neighbors, give up your current
 position for the random one.

  In other words, we use the probe to find out how far we *should* be from
  our neighbours. Then if we are much too close, we are probably the victim
  of an attack, so we randomise.

 So you're saying we need to calculate the ideal distance with the probes?
 That is not what I'm reading:

 The definition of much further needs to be determined experimentally, but it
 shouldn't be an issue (since the attack in question works by putting a
 neighbor thousands of times closer to you than it should be).
 
 Are you proposing that we estimate network size by using the probes to find
 the average value of the distance between a randomly selected value and the
 closest actual result? How does this tell us what the ideal distance should
 be?
 
AFAICS Oskar's proposal is very clear, at least if it was reported correctly:

If (the distance to your neighbours) < (arbitrary constant) * (distance from 
probe), then reset.

'The distance to your neighbours'
- You don't use this at all. Hence your interpretation MUST be wrong.

'The closest you can get'
- The distance between the target location and the closest node from the probe.

'The definition of much further needs to be determined experimentally'
- This is the arbitrary constant above.
- This is the tunable parameter, which we will need to experiment with. It's 
obviously going to be a tradeoff between security and performance.
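
A minimal sketch of the check as read here; the constant value, the helper names and the way the probe result is obtained are placeholders, and the constant is exactly the parameter that still needs experimental tuning:

CONSTANT = 0.037  # placeholder; the thread itself disputes how this number is used

def circ_dist(a, b):
    """Circular distance between two locations on the [0, 1) ring."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def should_reset(my_location, neighbour_locations, probe_target, probe_closest):
    """Reset if we are much closer to our neighbours than the probe result
    suggests we should be: (distance to neighbours) < CONSTANT * (distance
    from probe)."""
    neighbour_dist = min(circ_dist(my_location, loc) for loc in neighbour_locations)
    probe_dist = circ_dist(probe_target, probe_closest)
    return neighbour_dist < CONSTANT * probe_dist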

A corrupt network will likely have big gaps covering most of the keyspace. 
Routing to a random location will likely find us in one of these gaps. On the 
other hand, the average node should be within the area with normal-ish 
routing, even in this case - i.e. it should have mostly neighbours that are 
very close to it.

So usually the probe would return a large gap, while our neighbours are close 
by. And therefore we would reset.

But maybe I'm missing something important here...

I'm CC'ing devl since this is an important design discussion and there isn't 
anything confidential here.


signature.asc
Description: This is a digitally signed message part.
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Maven revisited

2013-01-31 Thread Ian Clarke
On Thu, Jan 31, 2013 at 12:59 PM, Matthew Toseland 
t...@amphibian.dyndns.org wrote:

 On Thursday 31 Jan 2013 17:50:32 Michael Grube wrote:
  On Thu, Jan 31, 2013 at 11:36 AM, Ian Clarke i...@freenetproject.org
 wrote:
   This is despite the fact that no such compromise as ever occurred on
 any
   project that I'm aware of, and since we don't do code audits of
 Freenet's
   current dependencies, our current approach doesn't immunize us against
 it
   anyway.

 Have you actually tried to find out?


If by try you mean a quick Google search (
https://www.google.com/webhp?sourceid=chrome-instantion=1ie=UTF-8#hl=entbo=dsclient=psy-abq=maven%20repository%20compromiseoq=gs_l=pbx=1fp=eba5ecb19bdd79c3ion=1bav=on.2,or.r_gc.r_pw.r_cp.r_qf.bvm=bv.41642243,d.b2Ibiw=1371bih=983
), then yes.

If we run our own repository:
 - We need to maintain it. This is more unnecessary work.


Not a lot, probably less than dealing with the freenet-ext.jar mess.


 - We need to host it. This is more CPU usage on the small, cheap, rather
 limited VM that runs the website etc.


It won't use significant CPU or bandwidth, only developers will access it,
and Maven caches dependencies locally.


 But most importantly, we need it to be reasonably easy to *develop Freenet
 anonymously*. This is not a theoretical aspiration. There are anonymous
 developers today, and some of them are extremely productive at times.


They can use a Tor proxy.


 Exactly what problem are you trying to solve here? It's really not that
 hard to build Freenet. Granted it should be easier; the immediate problem
 is you need not only freenet-ext.jar (which the build scripts will fetch
 for you if you set one line in a config file; the first time you run ant it
 will tell you this), but also the bouncycastle jar, which isn't
 auto-fetched.


I'm trying to bring us into 2013; Maven is virtually a standard Java tool
these days. freenet-ext.jar has to be built and has to be kept up-to-date.
It's basically an ugly home-grown dependency management solution.
Originally there were no alternatives, but now there are, and there are
easy solutions to the problems that you've outlined with it.

Ian.

-- 
Ian Clarke
Personal blog: http://blog.locut.us/
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Maven revisited

2013-01-31 Thread Matthew Toseland
On Thursday 31 Jan 2013 20:37:43 Ian Clarke wrote:
 On Thu, Jan 31, 2013 at 12:59 PM, Matthew Toseland 
 t...@amphibian.dyndns.org wrote:
 
  On Thursday 31 Jan 2013 17:50:32 Michael Grube wrote:
   On Thu, Jan 31, 2013 at 11:36 AM, Ian Clarke i...@freenetproject.org
  wrote:
This is despite the fact that no such compromise as ever occurred on
  any
project that I'm aware of, and since we don't do code audits of
  Freenet's
current dependencies, our current approach doesn't immunize us against
  it
anyway.
 
  Have you actually tried to find out?
 
 If by try you mean a quick
 Googlehttps://www.google.com/webhp?sourceid=chrome-instantion=1ie=UTF-8#hl=entbo=dsclient=psy-abq=maven%20repository%20compromiseoq=gs_l=pbx=1fp=eba5ecb19bdd79c3ion=1bav=on.2,or.r_gc.r_pw.r_cp.r_qf.bvm=bv.41642243,d.b2Ibiw=1371bih=983search,
 then yes.

They don't have to compromise the repository. All they have to do is spoof it, 
if we're using HTTP. Although as Michael pointed out it is possible to use 
HTTPS.
 
 If we run our own repository:
  - We need to maintain it. This is more unnecessary work.
 
 Not a lot, probably less than dealing with the freenet-ext.jar mess.

See below.
 
  - We need to host it. This is more CPU usage on the small, cheap, rather
  limited VM that runs the website etc.
 
 It won't use significant CPU or bandwidth, only developers will access it,
 and Maven caches dependencies locally.

Ok.
 
  But most importantly, we need it to be reasonably easy to *develop Freenet
  anonymously*. This is not a theoretical aspiration. There are anonymous
  developers today, and some of them are extremely productive at times.
 
 They can use a Tor proxy.

IMHO we should not force that on them. Tor has a different threat model, and is 
much easier to block. Whereas developing over Freenet, without using Tor at 
all, is quite possible right now, or would be if we maintained an official 
on-freenet git/hg repo (using tools that already exist). To be fair, existing 
anonymous devs do pull from the main repo via Tor, but IMHO we should not 
require them to do so.
 
  Exactly what problem are you trying to solve here? It's really not that
  hard to build Freenet. Granted it should be easier; the immediate problem
  is you need not only freenet-ext.jar (which the build scripts will fetch
  for you if you set one line in a config file; the first time you run ant it
  will tell you this), but also the bouncycastle jar, which isn't
  auto-fetched.
 
 I'm trying to bring us into 2013, Maven is virtually a standard Java tool
 these days.  freenet-ext.jar has to be built, has to be kept up-to-date.
 It's basically an ugly home-grown dependency management solution.
  Originally there were no alternatives, but now there are, and there are
 easy solutions to the problems that you've outlined with it.

No, Maven does not help with freenet-ext.jar at all. The end-user does not use 
Maven.

Including the dependency jars in the main freenet jar shipped is possible with 
or without Maven - except that it isn't for at least one jar, the Bouncycastle 
crypto provider, which needs to be bundled separately as it is signed. I'm not 
sure whether we could combine it if we signed the whole file, but even then 
we'd need a code signing cert for Java. We do need one for Windows, but IIRC 
you mostly have to pay separately for Java vs for Windows. For linux installs 
it's good for packages to be able to use the system version of bouncycastle 
(and other libraries), which is what originally motivated infinity0's work on 
splitting up freenet-ext.jar.

What does make a difference is the changes I made to the auto-updater last 
year. These allow us to ship the bouncycastle jar, to update it, and to ship 
whatever other jars we need, updating them when we need to. We can split up 
freenet-ext.jar however we want (including using infinity0's branch).

But given that freenet-ext.jar changes *very* slowly, I don't see an urgent 
issue.

The most urgent issue related to this area is updating the wrapper, which can 
cause problems on Windows, but which is tricky to update because wrapper.jar is 
included in freenet-ext.jar, and needs to be compatible with the native 
binaries. Maven does not help here either.


signature.asc
Description: This is a digitally signed message part.
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

[freenet-dev] Revocation messages: request review of english text

2013-01-31 Thread Matthew Toseland
Some strings that users hopefully will never see. But if they do see them, they 
need to be clear. Any comments welcome.

RevocationKeyFoundUserAlert.text=The Freenet auto-update system appears to have 
been compromised! A trusted member of the Freenet team has uploaded a special 
signed message to Freenet to say that the keys for the auto-updater have been 
stolen, leaked, or somebody has them who shouldn't. We have turned off 
auto-update to prevent malware from being installed on your computer. Please 
check the website ( https://freenetproject.org/ ) for updates (if you can do so 
safely), but be careful as that may not be secure either. The thief might even 
have the keys for the message below, so please don't blindly follow 
instructions given without confirmation. Sorry we messed up!
RevocationKeyFoundUserAlert.textDetail=The message is: ${message}
RevocationKeyFoundUserAlert.textDisabled=The auto-updater has been disabled. 
This might be because of a local problem, such as running out of disk space, or 
the auto-updating system may have been compromised.
RevocationKeyFoundUserAlert.textDisabledDetail=The reason is: ${message}.
RevocationKeyFoundUserAlert.title=URGENT: The Freenet auto-update system has 
been compromised!
RevocationKeyFoundUserAlert.titleDisabled=The auto-updater may have been 
compromised! We have turned it off for now, please click for more details.

PeersSayKeyBlownAlert.fetching=Your node is attempting to download the 
revocation certificate to find out more details.
PeersSayKeyBlownAlert.failedFetch=Your node has been unable to download the 
revocation certificate. Possible causes include an attack on your node to try 
to get you to update despite the key being blown, or your nodes lying about the 
key being blown. Please contact the developers or other Freenet users to sort 
out this mess.
PeersSayKeyBlownAlert.connectedSayBlownLabel=These connected nodes say that the 
key has been blown (we are trying to download the revocation cert from them):
PeersSayKeyBlownAlert.disconnectedSayBlownLabel=These nodes told us that the 
key has been blown, but then disconnected, so we could not fetch the revocation 
certificate:
PeersSayKeyBlownAlert.failedTransferSayBlownLabel=These nodes told us that the 
key has been blown, but then failed to transfer the revocation certificate:
PeersSayKeyBlownAlert.titleWithCount=Auto-update key blown according to 
${count} peer(s)!
PeersSayKeyBlownAlert.short=According to some of your peers it is not safe to 
auto-update. Auto-update has been disabled until we know if they are not making 
it up.

RevocationChecker.revocationFetchFailedMaybeInternalError=The auto-update 
system has failed due to an unexpected error: ${detail}. This might be 
because the auto-update key has been compromised (e.g. the keys have been 
stolen), so we have turned off auto-update as it may not be safe. However it 
might also be due to a local problem such as running out of disk space. If this 
is true, please fix the problem and restart Freenet. If this message does not 
go away, please check the website ( https://freenetproject.org/ ) and seek 
help. It might be useful to try fetching the key manually, but bear in mind it 
might have been inserted by the person who stole the keys: ${key}
RevocationChecker.revocationFetchFailedFatally=The auto-update system has been 
compromised! The private key may have been stolen, so auto-update has been 
turned off permanently. The file that should explain what has happened cannot 
be fetched due to an unexpected error: ${detail}. Please try fetching the key 
manually (the key might have been inserted incorrectly e.g. be too big; for 
safety's sake we have to turn off auto-update straight away rather than wait 
for the whole key): ${key}


signature.asc
Description: This is a digitally signed message part.
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Maven revisited

2013-01-31 Thread Ian Clarke
On Thu, Jan 31, 2013 at 3:34 PM, Matthew Toseland t...@amphibian.dyndns.org
 wrote:

But most importantly, we need it to be reasonably easy to *develop
 Freenet
   anonymously*. This is not a theoretical aspiration. There are anonymous
   developers today, and some of them are extremely productive at times.
 
  They can use a Tor proxy.

 IMHO we should not force that on them. Tor has a different threat model,
 and is much easier to block. Whereas developing over Freenet, without using
 Tor at all, is quite possible right now, or would be if we maintained an
 official on-freenet git/hg repo (using tools that already exist). To be
 fair, existing anonymous devs do pull from the main repo via Tor, but IMHO
 we should not require them to do so.


Now that I think about it, it may be possible to host a Maven repository in
Freenet…  AFAIK it's just straight-up HTTP GETs.


   I'm trying to bring us into 2013, Maven is virtually a standard Java
 tool
  these days.  freenet-ext.jar has to be built, has to be kept up-to-date.
  It's basically an ugly home-grown dependency management solution.
   Originally there were no alternatives, but now there are, and there are
  easy solutions to the problems that you've outlined with it.

 No, Maven does not help with freenet-ext.jar at all. The end-user does not
 use Maven.


Using Maven's assembly plugin, it's trivially easy to compile your code,
together with all dependencies, into a single .jar.

Ian.

-- 
Ian Clarke
Personal blog: http://blog.locut.us/
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Revocation messages: request review of english text

2013-01-31 Thread Ian Clarke
The term "blown" is jargon; I don't think we should use it.

All messages should give the user guidance as to what course of action to
take, and what the implications of the various messages are.

Ian.

On Thu, Jan 31, 2013 at 3:44 PM, Matthew Toseland t...@amphibian.dyndns.org
 wrote:

 Some strings that users hopefully will never see. But if they do see them,
 they need to be clear. Any comments welcome.

 RevocationKeyFoundUserAlert.text=The Freenet auto-update system appears to
 have been compromized! A trusted member of the Freenet team has uploaded a
 special signed message to Freenet to say that the keys for the auto-updater
 have been stolen, leaked, or somebody has them who shouldn't. We have
 turned off auto-update to prevent malware from being installed on your
 computer. Please check the website ( https://freenetproject.org/ ) for
 updates (if you can do so safely), but be careful as that may not be secure
 either. The thief might even have the keys for the message below, so please
 don't blindly follow instructions given without confirmation. Sorry we
 messed up!
 RevocationKeyFoundUserAlert.textDetail=The message is: ${message}
 RevocationKeyFoundUserAlert.textDisabled=The auto-updater has been
 disabled. This might be because of a local problem, such as running out of
 disk space, or the auto-updating system may have been compromized.
 RevocationKeyFoundUserAlert.textDisabledDetail=The reason is: ${message}.
 RevocationKeyFoundUserAlert.title=URGENT: The Freenet auto-update system
 has been compromised!
 RevocationKeyFoundUserAlert.titleDisabled=The auto-updater may have been
 compromised! We have turned it off for now, please click for more details.

 PeersSayKeyBlownAlert.fetching=Your node is attempting to download the
 revocation certificate to find out more details.
 PeersSayKeyBlownAlert.failedFetch=Your node has been unable to download
 the revocation certificate. Possible causes include an attack on your node
 to try to get you to update despite the key being blown, or your nodes
 lying about the key being blown. Please contact the developers or other
 Freenet users to sort out this mess.
 PeersSayKeyBlownAlert.connectedSayBlownLabel=These connected nodes say
 that the key has been blown (we are trying to download the revocation cert
 from them):
 PeersSayKeyBlownAlert.disconnectedSayBlownLabel=These nodes told us that
 the key has been blown, but then disconnected, so we could not fetch the
 revocation certificate:
 PeersSayKeyBlownAlert.failedTransferSayBlownLabel=These nodes told us that
 the key has been blown, but then failed to transfer the revocation
 certificate:
 PeersSayKeyBlownAlert.titleWithCount=Auto-update key blown according to
 ${count} peer(s)!
 PeersSayKeyBlownAlert.short=According to some of your peers it is not safe
 to auto-update. Auto-update has been disabled until we know if they are not
 making it up.

 RevocationChecker.revocationFetchFailedMaybeInternalError=The auto-update
 system has failed due to an unexpected error: ${detail}. This might be
 because the auto-update key has been compromized (e.g. the keys have been
 stolen), so we have turned off auto-update as it may not be safe. However
 it might also be due to a local problem such as running out of disk space.
 If this is true, please fix the problem and restart Freenet. If this
 message does not go away, please check the website (
 https://freenetproject.org/ ) and seek help. It might be useful to try
 fetching the key manually, but bear in mind it might have been inserted by
 the person who stole the keys: ${key}
 RevocationChecker.revocationFetchFailedFatally=The auto-update system has
 been compromized! The private key may have been stolen, so auto-update has
 been turned off permanently. The file that should explain what has happened
 cannot be fetched due to an unexpected error: ${detail}. Please try
 fetching the key manually (the key might have been inserted incorrectly
 e.g. be too big; for safety's sake we have to turn off auto-update straight
 away rather than wait for the whole key): ${key}

 ___
 Devl mailing list
 Devl@freenetproject.org
 https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl




-- 
Ian Clarke
Personal blog: http://blog.locut.us/
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl