Ok, it's run for a while now...

Good points.  The incomingInserts are up from 100-400 to 800-850 per day.  This
might be more due to tuning than anything else.

incomingRequests are up from 12k-48k to 61k-68k.  Again, maybe because of tuning.
pcacheAccepted up from 100-430 to 560-640.
sendingReplyHTL observations up from 200-1500 to 1800.
outboundAggregateRequests are down from 188k-433k to 192k-203k.
messageSendTime is up from 129-17k to 21k-46k.
messageSendTimeNonRequest is up from 5k-30k to 30k-66k.
successSearchTime is up from 7k-32k to 30k-53k.
requestSuccessRatio successes are up from 71-1170 to 2500-4400.
requestSuccessRatio Ratio is up from 0.006-0.035 to 0.03-0.07.
receivedData successes are up from 127-547 to 814-918.

So, despite what I said earlier, things appear to be moving through
the network, and my node seems to be able to handle it, though
latencies are climbing.  The other worrisome point is the low
requestSuccessRatio.  Even at 7%, this seems kinda low.  If we dropped
inbound requests by 93%, would this climb to near 100%?  Would that be
better?  Even sentData sucks; at 26% to 61% it seems kinda low.  Seems
like this should be closer to 98% or higher.  Is this a measure of how
bored people get waiting for data?
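
If requestSuccessRatio is just successes divided by incomingRequests
(my assumption, I haven't checked the code), the new numbers roughly
line up with the ratio band above:

    2500 / 68000 ~= 0.037
    4400 / 61000 ~= 0.072

so at least the stats look self-consistent; the problem is the ratio
itself.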


YAWaCI: Inbound requests have a 20-24% acceptance rate.  We need to
train up faster on the right number of requests to generate for each
neighbor, and smooth them out.  Upon connection startup, assume we can
issue large amounts, as we do now; then as QRs come in, throttle them
down.  The goal would be to get acceptance up into the 98% or better
range.  The extra 2% is so that we can quickly ramp the request load
back up, should the node become unswamped.  When this happens the
requests will travel farther without being dropped, the inefficiencies
should drop, and the goodness should increase.  Also, as this is done,
we don't need to get rid of QRs, as they would be a small amount of
data and a small amount of the bandwidth.
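
Roughly, I'm picturing a per-neighbor AIMD throttle, something like
the sketch below.  The class and method names are invented for
illustration, not the actual Fred API: shrink the window whenever a QR
comes back, and creep it back up on each acceptance so we keep probing
for headroom.

/*
 * Rough per-neighbor request throttle (AIMD style).  All names here
 * are invented for illustration; this is not the actual Fred code.
 */
public class NeighborRequestThrottle {

    private static final double INITIAL_WINDOW = 50.0;      // be generous at startup
    private static final double MIN_WINDOW = 1.0;
    private static final double INCREASE_PER_ACCEPT = 0.02;  // slow creep upward
    private static final double DECREASE_FACTOR = 0.9;       // back off on QR

    private double window = INITIAL_WINDOW; // requests allowed in flight
    private int inFlight = 0;

    /** Called before forwarding a request to this neighbor. */
    public synchronized boolean maySend() {
        if (inFlight >= window) {
            return false; // hold it back, or try another neighbor
        }
        inFlight++;
        return true;
    }

    /** The neighbor accepted the request: additive increase. */
    public synchronized void onAccepted() {
        inFlight--;
        window += INCREASE_PER_ACCEPT;
    }

    /** The neighbor sent a QueryRejected: multiplicative decrease. */
    public synchronized void onQueryRejected() {
        inFlight--;
        window = Math.max(MIN_WINDOW, window * DECREASE_FACTOR);
    }
}

The multiplicative backoff on QR plus the small additive bump per
accept is what should hold acceptance near the 98% target, while still
letting the window ramp back up quickly once the neighbor unswamps.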

The outstanding question would be: is depth-first searching better
than breadth-first?  Maybe not.  I don't know.

What prevents a query from bouncing back through our node?  Seems like
if left unchecked, we could have traffic that just loops around in
small circles, eating bandwidth and not being productive.  Having a
stop table, just like the failure table, that instead fails inbound
queries we've seen recently might be good.  This way, the other node
will redirect the query someplace new instead of someplace it has been
before.  Time to live: 30 minutes (or maybe 4 days), unless we get a
DF message, in which case it turns into a stop and we return the DF
immediately.  This can be run out of the store; I don't want to burn
core for this, and I don't have it to spare.
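
To make that concrete, the stop table could be little more than an
expiring map keyed on the query key.  The sketch below keeps it in
core just to show the logic; the names are made up, the real thing
would be store-backed as noted above, and caching the DF reply itself
for immediate return would be a further step not shown here.

import java.util.LinkedHashMap;
import java.util.Map;

/*
 * Sketch of a "stop table": remember recently-seen query keys for a
 * TTL and fail repeats so the upstream node routes somewhere new.
 * The in-core map and all names are illustrative only.
 */
public class StopTable {

    private static final long TTL_MS = 30L * 60L * 1000L; // 30 minutes, tunable
    private static final int MAX_ENTRIES = 10000;         // hard bound on memory

    // key -> expiry time in millis; LRU eviction keeps the table bounded
    private final Map<String, Long> seen =
        new LinkedHashMap<String, Long>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
                return size() > MAX_ENTRIES;
            }
        };

    /**
     * Returns true if this key was seen within the TTL; the caller
     * should then fail the inbound query instead of forwarding it again.
     */
    public synchronized boolean shouldStop(String key) {
        long now = System.currentTimeMillis();
        Long expiry = seen.get(key);
        if (expiry != null && expiry > now) {
            return true; // seen recently: stop it here
        }
        seen.put(key, now + TTL_MS); // remember it for next time
        return false;
    }
}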

Also, consider dumping maximumThreads down some, say to 40, to limit
latencies and memory requirements.
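
If that's just a freenet.conf change, I'd guess it looks something
like this (assuming the usual key=value format):

    # cap the worker thread pool to limit latency and memory
    maximumThreads=40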

And the last bit of good news: I can click on YoYo and, even though it
isn't in my datastore, fred can find it and fetch it within the 60
seconds, meaning my browser doesn't time out on the fetch.  I can
repeat with New, and it also works.  Things are improved.

I can start up a wget with a timeout of 600 and it feels like it will
manage to fetch anything, well, except for the rm -rf'd 60G store that
causes stuff to go missing, never to return (tafe/3 is a good example
of such things), but that is to be expected (I guess).  Actually, with
good routing, I wonder if all of the content could be found anyway.  I
could manage to keep all of freenet on disk; disk is cheap, bandwidth
is expensive.