I'm sorry for breaking into the normal development conversation, but this is
something that I feel may be useful for people to look at for a few reasons.

Recently, many people have been commenting on how long freenet is taking to
get to 1.0, and on other issues such as splitting the network (which I'm
not going to pass judgment on here).

A few things to remember: freenet has had at most six very active
developers at any one time. At other times, it has dropped to one (or zero).

Note: I may be off by a version on some of these
Freenet 0.1: before my time, sorry
Freenet 0.2: routing. no key encryption
Freenet 0.3: added SVK, SSK, KSK, CHK. Node to node encryption. fproxy
Freenet 0.4: state machine, new DS, ARKs, heisenbug and fix, fcp
Freenet 0.5: native DS, nio, ngr, open connection only routing, splitfiles
with FEC, distribution servlet, fproxy filter

Each of these major versions has had just that... major changes. freenet
development has not been simple. It has included much discussion of
cryptography among some very smart people, and some robust implementation by
some very good programmers (under varying amounts of intoxication :)
Freenet is also very large. It puts stresses on the JVM that may be
unmatched by any other Java program, and countless JVM bugs have been worked
around. It has been heavily optimised in areas of high CPU usage. It is
also very difficult to debug using a debugger, so it depends on log files
(which are often multiple gigabytes) to trace what caused a problem (if it
even shows up in the log!). As such, the fact that there are remaining bugs
should not come as a shock. Be shocked instead at how many bugs have been
fixed :)

freenet still has a long way to go. It'd be hard for any of us to say that
any version of freenet works. All versions of freenet either use large
amounts of CPU, memory, bandwidth, or threads, or just stall out and do
nothing given time. Freenet should *not* have to be restarted every few
days to start working again. It should not forget how to route, or just
stop accepting incoming queries. Data should be insertable and retrievable
without humongous time delays. Data should be findable.



What we should look for in upcoming freenet builds:

nodes not hammering each other into the stone age

working FCP (still no word on FIW). Could someone look at exactly what
behavior is causing the problem, so that it becomes easily reproducible?
That would be much more useful than just saying that it doesn't work and
expecting toad to find and fix it

muxing. Handling multiple sends/receives over a single connection (or a
few). This would reduce quite a bit of load, from CPU to threads and
memory. PeerHandler is a step towards this goal.
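To make the muxing idea concrete, here is a toy sketch (not Freenet's actual PeerHandler, whose internals I'm not reproducing here): each logical message is tagged with a channel id before going onto the shared "wire", and the receiver demultiplexes frames back into per-channel streams. One connection, one reader thread, many conversations.

```java
import java.util.*;

// Toy illustration of multiplexing: many logical streams share one
// physical connection by tagging each frame with a channel id.
// Names and framing are hypothetical, not Freenet's real protocol.
public class MuxDemo {
    // Frame layout (illustrative): [channelId byte][payload bytes]
    static byte[] frame(int channel, byte[] payload) {
        byte[] out = new byte[payload.length + 1];
        out[0] = (byte) channel;
        System.arraycopy(payload, 0, out, 1, payload.length);
        return out;
    }

    public static void main(String[] args) {
        // Three sends from two logical streams, interleaved on one "wire"
        List<byte[]> wire = new ArrayList<>();
        wire.add(frame(1, "hello".getBytes()));
        wire.add(frame(2, "world".getBytes()));
        wire.add(frame(1, "again".getBytes()));

        // Receiver side: demux each frame back to its stream by channel id
        Map<Integer, StringBuilder> streams = new HashMap<>();
        for (byte[] f : wire) {
            int ch = f[0];
            streams.computeIfAbsent(ch, k -> new StringBuilder())
                   .append(new String(f, 1, f.length - 1));
        }
        System.out.println(streams.get(1)); // helloagain
        System.out.println(streams.get(2)); // world
    }
}
```

The load savings come from needing only one socket (and one reader thread) per peer instead of one per transfer.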

new network protocol for restarts. Reduces the CPU load of reopening a
connection that was recently open.
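The idea behind the restart saving can be sketched as simple session caching (this is illustrative only; the class and method names are hypothetical, not the actual protocol): keep the negotiated session key around for a while, so a peer that reconnects soon after can skip the expensive key exchange.

```java
import java.util.*;

// Illustrative sketch of why a restart protocol saves CPU: cache the
// negotiated session key per peer, so reopening a recently-open
// connection skips the expensive crypto handshake.
// All names here are hypothetical, not Freenet's actual code.
public class RestartCache {
    private final Map<String, byte[]> sessionKeys = new HashMap<>();

    // Expensive path: full key negotiation (stubbed; stands in for a
    // Diffie-Hellman style exchange)
    byte[] negotiate(String peer) {
        byte[] key = ("key-for-" + peer).getBytes();
        sessionKeys.put(peer, key);
        return key;
    }

    // Cheap path: reuse the cached key when the peer reconnects soon after
    byte[] connect(String peer) {
        byte[] cached = sessionKeys.get(peer);
        return (cached != null) ? cached : negotiate(peer);
    }
}
```

A real protocol would also expire cached keys and authenticate the resumption, but the CPU win is the same: only the first connection pays for the handshake.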

more bugfixes.


Some of the additions to freenet were not included in Ian's original
paper. As the node has grown, refinements to the original design have
accumulated. NIO does away with the 'one socket, one thread' approach to
thread management. NGR tries to figure out the fastest source for the
data. Muxing lowers CPU usage further. The new DS and the native DS fixed
leaky files. The keytypes prevent data tampering. FEC allows parts of a
file to fall out of freenet while the file remains retrievable, and lets
freenet handle large files. FCP lets clients speak a simpler language
without having to deal with a lot of the crypto. ARKs allow nodes to
change IPs and still be routable (in theory :). And the distribution
servlet removes the centralisation in bootstrapping a new node.
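The FEC point deserves a tiny illustration. Freenet's actual codec is more sophisticated, but the erasure-coding idea can be shown with a single XOR parity block: store an extra check block alongside the data blocks, and any one lost block can be reconstructed from the others.

```java
// Toy XOR-parity sketch of the erasure-coding idea behind FEC.
// Not Freenet's actual FEC implementation - just the principle that
// redundant check blocks let a lost block be recomputed.
public class FecDemo {
    // XOR two equal-length blocks byte by byte
    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
        return out;
    }

    public static void main(String[] args) {
        byte[] d0 = "freenet!".getBytes();
        byte[] d1 = "rocks!!!".getBytes();
        byte[] parity = xor(d0, d1);   // extra check block, inserted too

        // Suppose d0 falls out of the network: recover it from d1 + parity
        byte[] recovered = xor(d1, parity);
        System.out.println(new String(recovered)); // freenet!
    }
}
```

With real codes the file is split into many blocks plus several check blocks, so a fraction of them can drop out of the network and the splitfile still reassembles.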

Some of the additions may even break the network (e.g. the new restart
protocol). However, they are needed, and the network is not healthy
without them. These expansions of the original idea are bringing freenet
closer to what it should be: an anonymous, distributed, peer-to-peer
publishing/data network.

-Mathew (and now back to your originally scheduled programming)

_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
