On Monday 22 Jul 2013 19:00:03 Victor Denisov wrote:
> >> 1. Paying for becoming a "VIP" Freenet node is not out of the
> >> question (people buy invites to elite torrent trackers for sizable
> >> amounts of money), but the benefits must be *very* obvious.
> >
> > There's no point if it's only the handful of elite nodes. It needs to
> > be the bulk of the network - everyone who routes requests where they
> > could conceivably spy on other nodes. The benefits could be made
> > obvious though: If we have a high bandwidth threshold then we'll have
> > much higher average transfer rates, and people will have less need to
> > hack their nodes to have 500 peers, as the Japanese are doing on
> > Frost after Winny and Perfect Dark fell down.
>
> "Elite" trackers have tens of thousands of users. Granted, most of those
> obtained their invites for free; but a fair number had paid anywhere
> between $5 and $100 for such invites.
Meaning that some people may be willing to pay to get on opennet. The catch
is, how many? If it reduces the rate of users joining the network by a factor
of 2, for example, the network will shrink even if we were growing slowly to
start with: departures stay the same, so halving the inflow can easily put us
below break-even. One problem with the yubikey thing is that it takes time to
deliver them. Hence the need to be able to just buy an invite, e.g. with BTC.

> > There isn't anything else. Except darknet. And everyone keeps telling
> > me that darknet is impossible, at least until the network is much
> > bigger.
>
> We (as a community) just haven't figured out such a way, IMO. Perhaps it
> doesn't really exist - I'm not qualified enough to tell yet - but I
> highly doubt it.

On the level of tunnels - what I would consider real security - it's a major
unsolved problem in academia. (Tunnel setup on DHTs.)

> > On opennet, your peers choose you. Hence MAST and connect-to-everyone
> > surveillance. On darknet, you choose your peers.
>
> Umm, would there be any benefit to security if this was reversed? If
> Freenet goes down the "tangible verification" route (certificates,
> yubikeys, whatever), hiding the fact that you run a Freenet node would
> become pointless anyway.

I don't follow?

> > If you trust your friends less than you trust the jack-boots, then
> > you probably don't have much to fear from the jack-boots.
>
> As I'd mentioned in a previous email, it's not a question of trust as
> such, it's a question of how much damage my friend can do to me if he
> decides to betray my trust.

No. If they can prove that a given IP address at a given time uploaded a given
illegal file, game over. Or downloaded it, but uploaders are worth more. That's
the only safe assumption.

> >> 4. I think that performance issues *absolutely* should be handled
> >> before anything else, even before security. I understand that many
> >> - even most - will disagree with me, but if I've learned *one* thing
> >> from practice, it is that people widely prefer less secure, but
> >> working, systems to more secure, but non-working, ones.
> >
> > Right up until the point when somebody publishes a toolkit for MAST,
> > and a list of paedophiles they busted with it.
>
> That's sort of a catch-22. The network won't be "worthy" of growing
> until it's secure, but it won't be secure until it grows. I'm not saying
> security isn't important; that would be stupid. But for an
> *experimental* network *under heavy development*, a lightly secure but
> well-performing network will be preferable to a very secure (until the
> next attack vector is found) but poorly-performing one.

A one million node network would still be vulnerable to MAST, and
connect-to-everyone would be quite affordable for small to medium-sized
corporate attackers.

> >> Right now, Freenet exhibits a level of performance which can only
> >> be called "abysmal". I can download torrents at 4 MB/s, reliably,
> >> one after another, from different trackers in different countries;
> >> considering that in Freenet mine (and everyone else's) traffic
> >> should pass through several nodes (say, 20 of them, worst case),
> >
> > Typically for requests it should be 5-7 or thereabouts.
>
> Yes, that's why I'm assuming 20 as worst case.
>
> >> I'd say Freenet should provide around 200 KB/s of sustained
> >> download performance (with the rest of my pipe being donated to
> >> other nodes, thus hiding my traffic).
> >> In reality, in my tests, on a lightly-loaded and well-integrated node
> >> I'm lucky to see speeds above 10 KB/s, with "typical" downloads making
> >> 2-3 KB/s on average, start to finish. My node with 90 peers only
> >> consumes around 200-250 KB/s (out of 1 MB/s allocated); my higher
> >> bandwidth allocation is effectively *wasted* by the inefficient
> >> network.
> >
> > Most nodes have relatively low bandwidth limits. We could boost
> > performance by excluding slow nodes (say under 40 KB/sec). This is the
> > first part of the proposal. Of course we'd lose a large number of
> > nodes - but we'd probably gain more to compensate when performance
> > improves.
>
> I still don't get why this happens. Are torrent clients run by a
> significantly different slice of users? Again: I can reliably
> demonstrate speeds of *at least* 500 KB/s for any torrent with 10+
> peers; with 20 peers, 90% of torrents, taken from all over the world,
> will max out my internet connection. This indicates average upload
> speeds of at least 50 KB/s, and probably closer to 100 KB/s (and with
> most connections being asymmetric in one way or another, we can safely
> assume that upload bandwidth is the limiting factor).

Because torrent traffic is more bursty than Freenet's? I.e. most of the time
the clients are idle? (A back-of-the-envelope sketch of the bandwidth
arithmetic above, and of the proposed minimum-bandwidth check, is appended at
the end of this mail.)

> >> If another major rewrite of Freenet is ahead (which, I'd argue, is
> >> long overdue), I'd be happy to provide more input (i.e., I think
> >> that filesharing and social communication are *much* more important
> >> than keyword search and site publishing), but I feel this email is
> >> already too bloated :-(.
> >
> > Filesharing implies keyword search, no? At the very least it requires
> > working forums.
>
> Filesharing implies content indices of one sort or another, that's true.
> But it doesn't imply, for example, spidering for content, like the Web
> does. Also, I was talking about a longer timespan - i.e., the decision
> to implement a keyword search for freesites, which, as far as I can see,
> *still* doesn't provide anything approaching a usable system (i.e., on
> my machine I routinely run out of memory if I start more than one
> search at a time), was made in what? 2008, IIRC?

There has been some progress on that, and a new index.

> > Feel free to rewrite Freenet, but I won't be around to do it.
>
> Dropping db4o, introducing new peer tiers, fully reworking bootstrapping
> - it already seems like a major rewrite of the key architectural pieces.
> I personally see no harm in taking a good hard look at other pieces at
> this time - and either getting rid of them, or reworking them, given the
> opportunity.

I don't have time to implement all the things I want to do to Freenet. That
was one of the original motivations behind this thread. However, "darknet is
infeasible and opennet is hopelessly insecure" is now an equally important
issue.

> Regards,
> Victor Denisov.
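
P.S. To make the numbers above concrete, here is a minimal, self-contained
Java sketch of the back-of-the-envelope arithmetic and of the kind of minimum
peer bandwidth check proposed as the first part of the plan. It is not actual
Freenet code: the Peer class, the field names and the 40 KB/s constant are
invented here purely for illustration.

import java.util.ArrayList;
import java.util.List;

public class BandwidthSketch {

    /** Hypothetical stand-in for a node's view of one peer. */
    static final class Peer {
        final String name;
        final int advertisedUpstreamKBps; // what the peer claims it can sustain

        Peer(String name, int advertisedUpstreamKBps) {
            this.name = name;
            this.advertisedUpstreamKBps = advertisedUpstreamKBps;
        }
    }

    /** Assumed threshold from the proposal: exclude peers slower than this. */
    static final int MIN_PEER_UPSTREAM_KBPS = 40;

    /** Keep only peers that meet the minimum advertised upstream bandwidth. */
    static List<Peer> filterSlowPeers(List<Peer> peers) {
        List<Peer> kept = new ArrayList<>();
        for (Peer p : peers) {
            if (p.advertisedUpstreamKBps >= MIN_PEER_UPSTREAM_KBPS) {
                kept.add(p);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // 1. Victor's expectation: a 4 MB/s pipe divided across ~20 hops (his
        //    worst case; 5-7 is more typical for requests) should still leave
        //    roughly 200 KB/s of end-to-end throughput.
        int pipeKBps = 4096;        // 4 MB/s local connection
        int worstCaseHops = 20;
        int typicalHops = 6;
        System.out.printf("Expected end-to-end, worst case: ~%d KB/s%n",
                pipeKBps / worstCaseHops);   // ~204 KB/s
        System.out.printf("Expected end-to-end, typical:    ~%d KB/s%n",
                pipeKBps / typicalHops);     // ~682 KB/s

        // 2. His torrent observation: 500 KB/s sustained from 10 peers implies
        //    an average of at least 50 KB/s of upstream per peer.
        int torrentKBps = 500;
        int torrentPeers = 10;
        System.out.printf("Implied average upstream per torrent peer: ~%d KB/s%n",
                torrentKBps / torrentPeers); // ~50 KB/s

        // 3. The proposed fix: refuse to keep peers under 40 KB/s upstream, so
        //    the connections that remain are worth routing bulk traffic over.
        List<Peer> peers = new ArrayList<>();
        peers.add(new Peer("fast-node", 100));
        peers.add(new Peer("ok-node", 45));
        peers.add(new Peer("slow-node", 12));
        for (Peer p : filterSlowPeers(peers)) {
            System.out.println("Keeping peer: " + p.name);
        }
    }
}

Compiled and run as-is, it just prints the figures discussed above (~200 KB/s
worst-case end-to-end, ~50 KB/s per torrent peer) and the two peers that
survive the 40 KB/s cut.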