IMHO Freenet will always use more power than a single centralised server, or a 
network of centralised servers, assuming that the internet infrastructure 
itself is reasonably efficient. The reasons follow:
- One server may use a lot of power, but it can serve a very large number of 
clients.
- Freenet is designed for high uptime, so for good performance we end up with 
end-user computers running for more hours than they otherwise would.
- Freenet is fairly resource intensive on end-user machines, particularly in 
CPU usage (=power), disk access (=power), and memory (which may not be 
significant from a power standpoint).

So I'm afraid your chances of claiming carbon credits for installing Freenet 
on lots of nodes are pretty slim. :|

Having said that, I believe the uptime problem *may* be resolvable in the 
medium term, and the resource usage problem probably isn't very significant: 
the CPU cost will decrease over time, and the disk cost is marginal anyway.

On Friday 03 April 2009 04:36:09 Bèrto 'd Sèra wrote:
> PS: one thing that would help in the process is the average number of files
> the system is storing and the average number of online nodes. There is no
> need to state anything about document content or nodes' geographical
> distribution, so I suppose this stuff can be published without any breach of
> individual privacy.
> 
> I don't think that the actual content of data (textual, multimedia or
> whatever else) can have any impact on energy consumption, so I guess we
> don't need any such figure. Pls correct me if I'm wrong.
> 
> From a CO2 saving POV download speed is not an issue, unless we say that a
> machine will stay UP just because of the download process. I can hardly
> imagine that happening, though. We all run downloads in the background,
> while we do other things. What I see is more like a bandwidth/CPU sharing
> model, as in grid computing (like
> http://en.wikipedia.org/wiki/World_Community_Grid). Under this approach,
> the one and only key factor is whether Freenet can easily adapt to changes
> in topology and use those resources that are present at the moment.
> 
> I have no notion whatsoever of how topology is managed and of the implied
> security threats, so I will not make any (most probably) idiotic
> suggestion. As far as I can see, my Freenet connections are constantly
> reorganizing themselves... so WHY is an extended uptime so much better? I
> tend to understand this impacts my response time only. Kind of "the longer
> I've been online, the better my node could explore the network to find
> quick and stable connections". So if I have a reduced or occasional uptime,
> all that happens is that I'm slower than I could be. Am I correct? And if I
> am correct... is there any way to benchmark "how much slower" it gets, and
> how quickly it gets better?
> 
> Berto
> 
2009/4/3 Bèrto 'd Sèra <berto.d.sera at gmail.com>
> 
> > Hi Matthew!
> >
> > In light of a number of things that are happening at the international
> > level, I think there is one issue that needs a close watch: CO2 saving. By
> > having an application run over Freenet we manage to do without big
> > servers. This is a huge saving, as huge backbone servers make a LOT of
> > CO2. If you can achieve a network that can work with a lot of
> > reduced-uptime nodes, this means tons of CO2 savings that can be certified
> > and cashed in, down to the individual level. Obviously enough, we will
> > need some numbers (average emission of a home machine vs emission of a
> > public dedicated WWW server) to reach that stage, hours of uptime and
> > numbers of machines needed to reach the same effect, etc.
> >
> > This has a deep impact on project financing. If you and I can prove we
> > saved one tonne of CO2 emissions by using Freenet we get cash for that,
> > and all the users get their share. Needless to say, money is often a much
> > more powerful engine than ideology, so this may dramatically impact the
> > diffusion and use of your technology. And if it saves CO2 emissions, the
> > more it's used, the more all users (and you) cash in.
> >
> > Do we have any data that can serve for a start-up computation of the
> > savings implied? If we identify several critical factors, I guess network
> > stats and basic benchmarks can be used to make a public chart of the
> > ongoing savings. This will need certification, and once certified it will
> > automatically turn into cash. As a first term of comparison, one can say
> > that an average 24*7*52 WWW server (drawing some 430 W) uses ~3.8 MWh per
> > year, for an equivalent discharge of 2.3 metric tons of CO2. The nodes
> > that run Freenet should not be taken into account at all, as they have the
> > same emissions that people reading WWW data would have anyway (IF AND ONLY
> > IF we require the same uptime for these nodes).
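
A minimal sketch of the arithmetic above, in Python. The 430 W draw and the
24*7*52 uptime are the figures from the message; the grid carbon intensity
(~0.6 t CO2 per MWh) is an assumption chosen to reproduce the quoted ~2.3 t:

    server_power_w = 430.0         # assumed average draw of a dedicated WWW server
    hours_per_year = 24 * 7 * 52   # 8736 h, the "24*7*52" uptime above
    energy_mwh = server_power_w * hours_per_year / 1_000_000
    co2_per_mwh = 0.6              # assumed tonnes of CO2 per MWh (grid-dependent)
    co2_tonnes = energy_mwh * co2_per_mwh
    print(round(energy_mwh, 2), round(co2_tonnes, 2))   # ~3.76 MWh, ~2.25 t CO2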
> >
> > Now let's do some basic math: the penalty a government pays for just ONE
> > such server is stated as €100 + inflation per missed tonne of CO2
> > equivalent. So it's €230 + inflation per year, per server.
> > If we aim to take away one hundred servers we are already dealing with a
> > financial revenue of €23K a year, plus inflation. Once clearly documented
> > and certified, this becomes a key factor for marketing Freenet to lots of
> > countries that desperately need to buy CO2e certificates for 40% of their
> > top industries, as it does not matter at all where they get their
> > certificates from.
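
Continuing the sketch: the €100/tonne penalty and the ~2.3 t per server come
from the message above; the one hundred servers are the hypothetical count
used there:

    penalty_eur_per_tonne = 100.0   # stated penalty per missed tonne of CO2e
    tonnes_per_server = 2.3         # from the server estimate above
    servers_replaced = 100          # hypothetical number of servers taken away
    revenue_eur = penalty_eur_per_tonne * tonnes_per_server * servers_replaced
    print(revenue_eur)              # 23000.0 -> roughly €23K per year, before inflation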
> >
> > I'm as aware as anyone that loads of alternative models for CO2e savings
> > have already been produced. Yet they all imply some kind of serious HW
> > change at mass level: see http://www.netvoyager.co.uk/general/wtc.html
> > Maybe a thin client is the best thing on earth, yet the reality is that we
> > all have normal PCs. If Freenet can offer a way to use these PCs to
> > produce CO2 savings without requiring the users to spend any money, then
> > it can be easily marketed.
> >
> > Any data I can use to make some energy computations? I'm in the process
> > of applying for funding and I really need to support my claims. Privacy is
> > fantastic (and sure enough nobody needs to drop it because of this), but
> > if Freenet can prove to be worth an investment from more than one point of
> > view, it will be much simpler to keep things going.
> >
> > Hmmmmm, you say Freenet does not "burst". Can you elaborate in detail?
> > This may well mean that we can make a lighter computation of
> > consumption/efficiency, since the consumption rate seems to be independent
> > of what a user actually does.
> >
> > Berto
> >
> > 2009/4/2 Matthew Toseland <toad at amphibian.dyndns.org>
> >
> >> Freenet can never compete on speed with traditional peer-to-peer, for
> >> several reasons, of which at least one is intractable:
> >> 1. Freenet assumes high uptime. This does not happen in practice, at
> >> least not for the mass market. To some degree we can resolve this with
> >> e.g. persistent requests in 0.10.
> >> 2. Freenet returns data via intermediaries, both on darknet and opennet.
> >> This is what makes our caching model work, and it's a good thing for
> >> security; however, broadcasting a search (or using some more efficient
> >> form of lookup) and then having those nodes with the data contact you
> >> directly will always be faster, often much much faster. Caching may well
> >> obviate this advantage in practice, at least in the medium term.
> >> 3. Freenet has a relatively low peer count. Hence the maximum transfer
> >> rate is determined by the output bandwidths of the peers, which is low
> >> (see the rough worked example after this list). Increasing the number of
> >> peers would increase various costs, especially if they are slow, and make
> >> it harder to see whether the network can scale; on the other hand, it
> >> would increase maximum download rates...
> >> 4. Freenet avoids ubernodes. Very fast nodes are rightly seen as a
> >> threat, because over-reliance on them makes the network very vulnerable.
> >> Practically speaking they may be attacked; if this is common, it again
> >> neutralises this advantage of "traditional" p2p.
> >> 5. FREENET DOESN'T BURST.
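
A rough worked example for point 3, with purely hypothetical numbers (neither
the peer count nor the bandwidths are from the message above):

    peers = 20                  # hypothetical number of peers a node has
    peer_upstream_kbps = 160    # hypothetical upstream of each peer, kbit/s
    fanout = 20                 # each peer splits its upstream among ~this many peers
    max_download_kbps = peers * (peer_upstream_kbps / fanout)
    print(max_download_kbps)    # 160.0: the ceiling is set by the peers' shared upstream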
> >>
> >> The last is the fundamental, intractable issue IMHO. Freenet sends
> >> requests at
> >> a constant rate, and exchanges data between peers at a roughly constant
> >> rate.
> >> On something like Perfect Dark (which admittedly has much higher average
> >> upstream bandwidth and bigger stores), you start a request, and you get a
> >> huge great spike until the transfer is complete. It's similar on
> >> bittorrent,
> >> provided the file is popular. On Freenet, our load management is all
> >> designed
> >> to send requests constantly, and in practice, up to a certain level, it
> >> will
> >> use as much bandwidth as you allow it. We could introduce a monthly
> >> transfer
> >> limit as well as the upstream limit, but this would not help much, because
> >> bursting is inherently dangerous for Freenet's architecture. If you are
> >> Eve,
> >> and you see a big burst of traffic spreading out from Alice, with tons of
> >> traffic on the first hop, lots on the second, elevated levels on the
> >> third,
> >> you can guess that Alice is making a big request. But it's a lot worse
> >> than
> >> that: If you also own a node where the spike is perceptible, or can get
> >> one
> >> there before the spike ends, you can immediately identify what Alice is
> >> fetching! The more spiky the traffic, the more security is obliterated.
> >> And
> >> encrypted tunnels do not solve the problem, because they still have to
> >> carry
> >> the same data spike. Ultimately only CBR links solve the problem
> >> completely;
> >> what we have right now is hope that most of the network is busy enough to
> >> hide traffic flows, but this is the same assumption that many other
> >> systems
> >> rely on. But big spikes - which are necessary if the user wants to queue a
> >> large download and have it delivered at link speed - make it much worse.
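
An illustrative sketch of why a burst is so visible to an observer. This is
not Freenet code; the baseline and burst figures are made-up numbers:

    import random

    def trace(baseline_kbps, burst_kbps=0, burst_start=None, seconds=60):
        # Per-second outgoing bandwidth of a node: a noisy constant baseline,
        # plus an optional transient spike (a queued download served at link speed).
        rates = []
        for t in range(seconds):
            rate = baseline_kbps * random.uniform(0.9, 1.1)
            if burst_start is not None and burst_start <= t < burst_start + 10:
                rate += burst_kbps
            rates.append(rate)
        return rates

    steady = trace(baseline_kbps=20)
    bursty = trace(baseline_kbps=20, burst_kbps=200, burst_start=30)

    # Eve only needs a crude threshold over the steady baseline to spot the
    # spike, and its timing correlates across hops, pointing back toward Alice.
    threshold = 2 * max(steady)
    print(any(r > threshold for r in steady))   # False
    print(any(r > threshold for r in bursty))   # True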
> >>
> >> There are lots of ways we can improve Freenet's performance, and we will
> >> implement some of the more interesting ones in 0.9: For example, sharing
> >> Bloom filters of our datastore with our peers will gain us a lot, although to
> >> what degree it can work on opennet is an open question, and encrypted
> >> tunnels
> >> may eat up most of the hops we gain from bloom filters. And new load
> >> management will help too when we eventually get there. However, at least
> >> for
> >> popular data, we can never achieve the high, transient download rates that
> >> bursty filesharing networks can. How does that affect our target audience
> >> and
> >> our strategy for getting people to use Freenet in general? Does it affect
> >> it?
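
A minimal Bloom-filter sketch to illustrate the idea mentioned above. This is
not Freenet's implementation; the sizes, hash construction, and key names are
illustrative. A peer publishes a compact bit array summarising its datastore,
so a neighbour can cheaply check whether that peer probably holds a key before
routing a request to it, at the cost of occasional false positives:

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=8192, num_hashes=4):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, key: bytes):
            # Derive several bit positions from SHA-256 of (index || key).
            for i in range(self.num_hashes):
                digest = hashlib.sha256(bytes([i]) + key).digest()
                yield int.from_bytes(digest[:4], "big") % self.size

        def add(self, key: bytes):
            for pos in self._positions(key):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, key: bytes) -> bool:
            # False means "definitely not stored"; True means "probably stored".
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(key))

    peer_store = BloomFilter()
    peer_store.add(b"CHK@example-key")                       # hypothetical key
    print(peer_store.might_contain(b"CHK@example-key"))      # True
    print(peer_store.might_contain(b"CHK@something-else"))   # almost certainly False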