I am seeing:

java.lang.NullPointerException
        at freenet.interfaces.BaseLocalNIOInterface.intAddress(BaseLocalNIOInterface.java:42)
        at freenet.interfaces.BaseLocalNIOInterface.hostAllowed(BaseLocalNIOInterface.java:194)
        at freenet.interfaces.BaseLocalNIOInterface.dispatch(BaseLocalNIOInterface.java:235)
        at freenet.interfaces.NIOInterface.acceptConnection(NIOInterface.java:105)
        at freenet.transport.tcpNIOListener.accept(tcpNIOListener.java:107)
        at freenet.transport.ListenSelectorLoop.processConnections(ListenSelectorLoop.java:106)
        at freenet.transport.AbstractSelectorLoop.loop(AbstractSelectorLoop.java:729)
        at freenet.transport.ListenSelectorLoop.run(ListenSelectorLoop.java:146)
        at java.lang.Thread.run(Unknown Source)


It looks new.

The lack of load that I thought had gone away hasn't.  Maybe others
think my node sucks?!  The pSearchFailed looks nice; from memory,
something like 1e-6 after 15 hours of uptime.

receivedData Ratio has gone up from 56-73% to 91%, though this was
accomplished by dropping tries from 200-1300 down to 59.  Top success
count went down from 918 to 54.

sentData Ratio has gone up from 30-70% to 93%, though at the same sort
of cost: tries down from 980-4470 to 248, and successes down from
260-2720 to 231.

messageSendTimeNonRequest has improved from 4s-70s to 26.7ms.
messageSendTime has improved from 8s-48s to 25ms.

I had:

defaultResetProbability=0.002
cacheProbPerHop=0.002

in my config to try to unload my node a bit.  Maybe that worked too
well.  I'm removing those.

Maybe I'll let my node sit for a while to smooth out and see where it
winds up.

I'd still like to see a DF table run out of the store.  If the node is
loaded, hand out references to peers that had the data, or to which we
successfully sent the data, with some probability (maybe 20%
initially).
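A minimal sketch of that handoff rule.  The class, method names, and
the peers-as-strings representation are mine, not anything from fred;
it only illustrates the "with some probability, redirect to a known
holder" behaviour described above:

```java
import java.util.Random;

// Hypothetical sketch of the DF-table handoff proposed above. When a
// loaded node sees a request for data it has seen before, with some
// probability it hands back a reference to a peer that had the data
// (or that we successfully sent it to) instead of routing the request.
public class DFHandoff {
    private final Random rng = new Random();
    private final double handoffProbability; // proposed 0.20 initially

    public DFHandoff(double handoffProbability) {
        this.handoffProbability = handoffProbability;
    }

    // dataHolders: peers recorded as having the data. Returns a peer
    // reference to hand out, or null to route the request normally.
    public String maybeHandoff(String[] dataHolders, boolean nodeIsLoaded) {
        if (!nodeIsLoaded || dataHolders.length == 0)
            return null; // nothing to hand off; route normally
        if (rng.nextDouble() < handoffProbability)
            return dataHolders[rng.nextInt(dataHolders.length)];
        return null;
    }
}
```

An unloaded node, or one with no recorded holders, always falls through
to normal routing; the probability only gates the loaded case.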

I don't think this is any worse than connecting to 2000 nodes at a
time and doing an HTL=1 breadth-first search for data.  Sure, that
lets you find out everyone who has the data, but you can already do
that.

Entries can be timed out after n days without a hit.  The main idea is
to get more work out of freenet without consuming any more bandwidth,
to increase the odds of finding hard-to-find data, and to push work to
the edges.
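The expiry rule is simple to state in code.  Again the names and the
map-based layout are my own assumptions, just to pin down "n days
without a hit":

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch of DF-table expiry: drop entries that have gone
// maxIdleDays without a hit. Not taken from any Freenet source.
public class DFExpiry {
    static final long DAY_MS = 24L * 60 * 60 * 1000;
    final Map<String, Long> lastHit = new HashMap<>(); // key -> last hit (ms)

    void recordHit(String key, long nowMs) {
        lastHit.put(key, nowMs);
    }

    // Remove every entry idle for more than maxIdleDays.
    void expire(long nowMs, int maxIdleDays) {
        Iterator<Map.Entry<String, Long>> it = lastHit.entrySet().iterator();
        while (it.hasNext()) {
            if (nowMs - it.next().getValue() > maxIdleDays * DAY_MS)
                it.remove();
        }
    }
}
```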

The reason this will work is that freenet is so bandwidth-limited that
we can store the complete routing solution for all data in freenet
without running out of disk.  If freenet ever improves its efficiency,
we will tend to run down the store, and we can then fall back to
normal routing.

It has been suggested that freenet spends 80% of its resources
(upstream) on routing; if true, this may speed freenet up by 4x within
a short period of time, maybe within a week.  (Side note: I'd love to
see fred report the real `overhead' cost.)

The table can be a fixed percentage of the store size (say 5-10%).
With my store, that 10% would be 93M keys.  Just how big is freenet
anyway?  Anyway, assuming that every node is just like mine, that's
enough room for complete information on 2400 hosts.
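The arithmetic in that paragraph can be checked directly.  The 93M-key
table size and the 2400-host figure are from the post; the per-node
key count below is just what those two numbers imply, not a measured
value:

```java
// Back-of-envelope check of the capacity claim above: a 93M-entry
// table holding complete key information for 2400 nodes implies each
// node's store indexes roughly 38,750 keys.
public class CapacitySketch {
    public static void main(String[] args) {
        long tableEntries = 93_000_000L;          // 10% of the author's store
        long hosts        = 2_400L;               // claimed coverage
        long keysPerNode  = tableEntries / hosts; // implied store size per node
        System.out.println(keysPerNode + " keys per node");
    }
}
```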

What's the biggest risk?  It will cover up serious flaws in the
routing algorithm, should we ever need to use it.  This doesn't
matter, as disk is growing faster than network upstream costs.  We
need to optimize for bandwidth at the expense of disk.
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
