On Saturday 27 September 2008 17:10, Matthew Toseland wrote:
> On Saturday 27 September 2008 15:41, Michael Rogers wrote:
> > On Sep 27 2008, NextGen$ wrote:
> > >If someone you don't trust has physical access to your computer you are
> > >doomed in any case... whether freenet is running or not when he gets his
> > >hands on the keyboard doesn't change anything.
> > 
> > It's worthwhile to protect against casual attackers even if you can't 
> > protect against determined attackers. For example I have a password on my 
> > laptop, even though someone *could* pop the case open and clone the hard 
> > drive.
> 
> Laptops are a *big* problem for Freenet. Right now there are probably more 
> laptops than desktops; in the future there will be vastly more laptops than 
> desktops. Laptops suck:
> 1. Laptops have low uptime. Much less than the 24x7 we would ideally require 
> for a stable network, effective downloading, etc.
> 2. Laptops are frequently used where there is no internet connectivity. 
> Okay, we can cope with this.
> 3. Laptops are frequently used on trains, in cafes etc, where the internet 
> connectivity is slow, throttled, possibly metered, NATed with no possibility 
> of port forwarding, and with IPs changed very frequently (i.e. after a couple 
> of hours when the train reaches its destination). This means that the user 
> probably doesn't WANT to run Freenet, and also that it will perform poorly 
> and find it difficult to connect to darknet peers.
> 4. Double NAT is more common with laptops even when used at home: a wired 
> router connected to a wireless router.
> 
> To be more specific:
> 
> On darknet, it may be very difficult to achieve a connection to your peers. 
> Even if the node does manage to connect, there's a good chance they're not 
> online. IMHO to make a pure darknet consisting mostly of laptops work well 
> we will need to:
> 1) Make requests work on a store-and-forward basis, passed on in prioritised 
> bundles as different parts of the network come back online.

Sorry, I'm missing something important here. Store-and-forward is only half 
the picture. The other half is persistent subscriptions. So what we're 
talking about here is true passive requests, aware of network churn and 
rerouted when a better destination comes online. Throw in Bloom filters for 
adjacent peers on top and you have something that ought to work reasonably 
well: when I'm online, I send a bundle of requests to my peer; I go offline; 
the data trickles back to him; then I come back online and collect the data.
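
To make that a bit more concrete, here's a very rough Java sketch of the kind 
of per-peer structure I mean. It's entirely hypothetical - none of these 
classes or methods exist in the node, and real keys, priorities and expiry 
handling would obviously be richer than this:

    import java.util.*;

    // Purely illustrative: one queue per darknet peer, holding persistent
    // subscriptions while the peer is offline plus data waiting to be collected.
    class PassiveRequest implements Comparable<PassiveRequest> {
        final String key;        // routing key we are subscribed to (hex)
        final int priority;      // lower = more urgent
        final long expiryTime;   // give up after this (ms since epoch)

        PassiveRequest(String key, int priority, long expiryTime) {
            this.key = key; this.priority = priority; this.expiryTime = expiryTime;
        }
        public int compareTo(PassiveRequest o) {
            return Integer.compare(priority, o.priority);
        }
    }

    class PeerRequestQueue {
        private final PriorityQueue<PassiveRequest> pending = new PriorityQueue<>();
        private final Map<String, byte[]> completed = new HashMap<>();

        // Register a persistent subscription while the peer is offline.
        synchronized void subscribe(PassiveRequest req) { pending.add(req); }

        // When the peer (or a better route) comes online, hand over a
        // prioritised bundle of still-live requests.
        synchronized List<PassiveRequest> buildBundle(int maxSize, long now) {
            List<PassiveRequest> bundle = new ArrayList<>();
            while (!pending.isEmpty() && bundle.size() < maxSize) {
                PassiveRequest r = pending.poll();
                if (r.expiryTime > now) bundle.add(r); // silently drop expired ones
            }
            return bundle;
        }

        // Data that trickles back while the subscriber is offline is parked here.
        synchronized void deliver(String key, byte[] data) { completed.put(key, data); }

        // The subscriber collects everything when it comes back online.
        synchronized Map<String, byte[]> collect() {
            Map<String, byte[]> out = new HashMap<>(completed);
            completed.clear();
            return out;
        }
    }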

Of course there are tricky tradeoffs, e.g. rerouting while your uplink is 
down versus the load cost of rerouting every time we disconnect, and we'd 
need a new load management scheme (maybe some form of token passing). There's 
a lot of design to fill in before we can think about simulating it.
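
On the token passing idea, a minimal sketch of the sort of thing I mean 
(again hypothetical - it ignores persistence across restarts, fairness 
between peers, and tokens for queued offline requests):

    import java.util.concurrent.Semaphore;

    // Sketch only: each peer grants us a fixed number of request tokens, and
    // we may only forward or queue a request towards that peer while we hold
    // a token. Tokens come back when the request completes, expires or is
    // rerouted elsewhere.
    class PeerTokenPool {
        private final Semaphore tokens;

        PeerTokenPool(int capacity) { tokens = new Semaphore(capacity); }

        // True if we may send/queue one more request towards this peer.
        boolean tryAcquire() { return tokens.tryAcquire(); }

        // Called when a request finishes, times out, or is rerouted away.
        void release() { tokens.release(); }
    }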

> 2) Make swapping work in a similar way - not only taking into account nodes 
> that can reasonably be expected to check in within the next, say, 24 hours, 
> but also making the swapping mechanism itself tolerate high downtimes.
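
In other words, swapping would have to reason about whether an offline peer 
is likely to come back soon. Something along these lines - illustrative only, 
the 24-hour threshold and the estimator are made up:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative only: decide whether an offline darknet peer should still
    // count for swapping, based on the gaps between its recent sessions.
    class PeerUptimeEstimator {
        private static final long DAY_MS = 24L * 60 * 60 * 1000;
        // Each entry is {connectedAt, disconnectedAt} in ms since epoch.
        private final Deque<long[]> sessions = new ArrayDeque<>();

        void recordSession(long connectedAt, long disconnectedAt) {
            sessions.addLast(new long[] { connectedAt, disconnectedAt });
            while (sessions.size() > 20) sessions.removeFirst(); // bounded history
        }

        // If every recent gap between sessions was under 24 hours, we can
        // reasonably expect the peer to check in within the next day.
        boolean expectedToCheckInWithinADay() {
            long prevEnd = -1, worstGap = 0;
            for (long[] s : sessions) {
                if (prevEnd >= 0) worstGap = Math.max(worstGap, s[0] - prevEnd);
                prevEnd = s[1];
            }
            return !sessions.isEmpty() && worstGap < DAY_MS;
        }
    }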
> 
> Of course, this is precisely how I would like the network to evolve in the 
> long term in any case, because *this is how we will beat internet blocking* 
> and run on intermittent transports such as sneakernet.
> 
> Even on opennet, high downtimes will be a severe problem, because data will 
> not be reachable at the time that the user wants it. One solution to this is 
> to have a *massive* amount of redundancy. To some degree we already do that: 
> a block will be duplicated an average of 3 times according to simulations 
> (but frequently a lot more than that), and then we have FEC - twice as many 
> blocks as strictly needed are inserted, such that any 128 out of 256 are 
> sufficient to reconstruct a segment. The other solution is to make 
> offline data available at a delay, as described above. 
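
To put a rough number on how much the 128-of-256 FEC buys us against offline 
nodes: if each inserted block were independently reachable with probability 
p, the segment survives when at least 128 of the 256 blocks are reachable. 
Block availability isn't really independent, so treat this as a 
back-of-the-envelope illustration only:

    import java.math.BigDecimal;
    import java.math.MathContext;

    // Back-of-the-envelope only: probability that a 128-of-256 segment is
    // recoverable if each block is independently reachable with probability p.
    class SegmentAvailability {
        static double recoverable(int total, int needed, double p) {
            MathContext mc = MathContext.DECIMAL128;
            BigDecimal sum = BigDecimal.ZERO;
            BigDecimal coeff = BigDecimal.ONE; // C(total, 0)
            for (int k = 0; k <= total; k++) {
                if (k >= needed) {
                    BigDecimal term = coeff
                            .multiply(BigDecimal.valueOf(p).pow(k, mc), mc)
                            .multiply(BigDecimal.valueOf(1 - p).pow(total - k, mc), mc);
                    sum = sum.add(term, mc);
                }
                // Update C(total, k) -> C(total, k + 1).
                coeff = coeff.multiply(BigDecimal.valueOf(total - k), mc)
                             .divide(BigDecimal.valueOf(k + 1), mc);
            }
            return sum.doubleValue();
        }

        public static void main(String[] args) {
            // With only 60% of blocks reachable, the segment is still almost
            // always recoverable...
            System.out.println(recoverable(256, 128, 0.60)); // ~0.999
            // ...but at 40% it drops well below 1%, which is why delayed
            // (passive) fetching matters too.
            System.out.println(recoverable(256, 128, 0.40)); // ~0.001
        }
    }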
> 
> But there is another problem on opennet: reconnecting. At the moment, we have 
> a limit of 20 peers (mostly to try to avoid prejudicing the network too much 
> in favour of opennet). After 5 minutes of downtime we drop opennet peers. If 
> they want to reconnect later, and we are port forwarded so we see the 
> incoming packets, and we need the peer, we may let them in; we keep 50 old 
> opennet peers for this purpose. I suspect that in practice this doesn't often 
> work, and we reseed instead, which gets us connections in roughly the right 
> area. But reseeding is a central bottleneck: we may be able to increase the 
> capacity, but it's always going to be a choke point.
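
For reference, roughly what that policy amounts to, as a simplified sketch - 
the constants (20 peers, 50 remembered old peers, 5 minute drop) match the 
text above, but the structure and names are made up and don't match the 
actual opennet code:

    import java.util.*;

    // Simplified sketch of the reconnection policy described above.
    class OpennetReconnectPolicy {
        static final int MAX_PEERS = 20;                  // opennet peer limit
        static final int MAX_OLD_PEERS = 50;              // dropped peers we remember
        static final long DROP_AFTER_MS = 5 * 60 * 1000;  // drop after 5 minutes down

        private final Set<String> currentPeers = new HashSet<>();
        private final LinkedHashSet<String> oldPeers = new LinkedHashSet<>(); // oldest first

        boolean shouldDrop(long lastSeenMs, long nowMs) {
            return nowMs - lastSeenMs > DROP_AFTER_MS;
        }

        void onPeerDropped(String peerId) {
            currentPeers.remove(peerId);
            oldPeers.remove(peerId);   // re-adding moves it to the newest position
            oldPeers.add(peerId);
            Iterator<String> it = oldPeers.iterator();
            while (oldPeers.size() > MAX_OLD_PEERS) { it.next(); it.remove(); }
        }

        // Called when an incoming packet arrives from a previously known peer.
        // Only possible if we are port forwarded, hence the parameter name.
        boolean allowReconnect(String peerId, boolean seenIncomingPacket) {
            if (!seenIncomingPacket) return false;              // behind NAT: we never see it
            if (!oldPeers.contains(peerId)) return false;       // not one of our 50 old peers
            if (currentPeers.size() >= MAX_PEERS) return false; // we don't need the peer
            oldPeers.remove(peerId);
            currentPeers.add(peerId);
            return true;
        }
    }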
> 
> In conclusion, in the short term laptops suck. In the long term, we need to 
> move away from this 24x7 real time connectivity myth, because it's 
> increasingly bogus except for servers. And we *don't* want to rely on geeks 
> running servers. That does mean that large or unpopular content will have to 
> be fetched over a period of days or perhaps longer. But if we manage it well 
> there could be a huge amount of content reachable in what for decentralised 
> filesharing purposes is a reasonable amount of time IMHO.
> > 
> > Cheers,
> > Michael