Salah Coronya wrote:
Well, so far routing doesn't seem to have improved; most requests are failing (51051 requests attempted, 409 succeeded; 514 inserts attempted, 7 succeeded — success rates under 2% in both cases). About 14000 qph here.
That's disappointing - anyone else seen any change, positive or negative, with recent builds? Most freesites seem to be retrievable for me, but
I've seen a big increase in RNFs since backoff. My Search-died probabilities are now all bad (0.999-1.0); I used to normally see a few with sub-0.9 values.
FEC downloads are still much, much slower than they used to be (although this may just be because I am downloading very old splitfiles).
It's been brought up several times that the reason NGR might not appear to
be working is that the network is oversaturated (you can't fit an elephant through a straw, no matter how hard you suck; no matter how good NGR is, no routing scheme is going to help if there's nowhere for
the data to go because everyone's link is saturated). I propose that in
the "unstable" branch (or maybe re-opening the "experimental" branch), FCP bandwidth/connections be throttled to an artificially low number.
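A minimal sketch of what that throttled branch configuration might look like; the key names below mirror the style of freenet.conf options, but the exact names and values here are assumptions for illustration, not the real branch settings:

```ini
# Hypothetical freenet.conf fragment for a throttled "unstable" build.
# Deliberately low limits, so NGR can be observed on an unsaturated link.
outputBandwidthLimit=4096    # bytes/sec, artificially low
inputBandwidthLimit=4096     # bytes/sec, artificially low
maxNodeConnections=20        # cap on simultaneous connections
```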
I think the whole overloading thing is a red herring; we have already devoted considerable energy to this, and I think the exponential backoff is doing a good enough job of addressing it (anyone got contrary evidence?).
I would guess what I see is down to it balancing better but not doing any good for specialisation.
I think the underlying problem is still routing. One possibility is that the Estimator algorithm just isn't very good at estimating (this could be due to the increased sensitivity at the outset leading to an essentially random estimation curve).
This theory can be tested by recording response-time information for a given node and then feeding it through the estimator algorithm, to see how well it predicts and to optimise its various parameters.
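The replay idea above can be sketched roughly as follows. This is not the real NGR estimator — a simple exponential moving average stands in for it, and all names and the sample data are invented — but it shows the shape of the test: replay recorded response times, score the prediction error, and sweep a tunable parameter:

```java
/**
 * Sketch: replay recorded response times through a stand-in estimator
 * (an exponential moving average) and measure its mean prediction error,
 * sweeping the smoothing parameter to find the best-performing value.
 */
public class EstimatorReplay {

    /** Mean absolute prediction error of an EMA with smoothing factor alpha. */
    static double replayError(double[] responseTimes, double alpha) {
        double estimate = responseTimes[0];
        double totalAbsError = 0;
        for (int i = 1; i < responseTimes.length; i++) {
            // Error of the prediction made *before* seeing this sample
            totalAbsError += Math.abs(responseTimes[i] - estimate);
            // Update the estimate with the new observation
            estimate = alpha * responseTimes[i] + (1 - alpha) * estimate;
        }
        return totalAbsError / (responseTimes.length - 1);
    }

    public static void main(String[] args) {
        // Fabricated response-time samples in milliseconds
        double[] samples = {120, 130, 500, 125, 118, 620, 122};
        double bestAlpha = 0, bestErr = Double.MAX_VALUE;
        for (double a = 0.05; a <= 0.95; a += 0.05) {
            double err = replayError(samples, a);
            if (err < bestErr) { bestErr = err; bestAlpha = a; }
        }
        System.out.printf("best alpha=%.2f, mean abs error=%.1f ms%n",
                          bestAlpha, bestErr);
    }
}
```

The same harness would work for the real estimator: swap the EMA for the actual Estimator class and sweep its sensitivity parameters instead of alpha.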
The worst-case scenario is that we impose forced specialization. Clearly this is undesirable, since specialization *should* occur naturally, but we may need to give it a kick-start.
I would suggest asking 8/16 of the big nodes to delete a fraction of their datastore (at the same time) and seeing what happens. You can then decide whether it's a good way to kick-start any future changes (assuming the current approach doesn't work) or whether a strict(er) coded QR/DNF-based approach will be required.
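The "delete a fraction of the datastore" step could be sketched like this; a HashMap stands in for the real store, and the method name and fraction are illustrative, not Freenet's actual datastore API:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.Random;

/**
 * Sketch of the proposed kick-start: drop a random fraction of a node's
 * datastore entries. A generic Map stands in for the real store.
 */
public class StoreTrim {

    /** Removes each key with probability 'fraction'; returns how many were dropped. */
    static <K, V> int dropFraction(Map<K, V> store, double fraction, Random rnd) {
        int dropped = 0;
        Iterator<K> it = store.keySet().iterator();
        while (it.hasNext()) {
            it.next();
            if (rnd.nextDouble() < fraction) {
                it.remove(); // Iterator.remove avoids ConcurrentModificationException
                dropped++;
            }
        }
        return dropped;
    }
}
```

Randomly sampling which keys to drop (rather than, say, dropping the oldest) keeps the surviving keyspace unbiased, so any re-specialization that follows is driven by routing rather than by the deletion policy.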
_______________________________________________ Devl mailing list [EMAIL PROTECTED] http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
