On Thursday 09 April 2009 10:43:32 bo-le at web.de wrote:
> > -----Original Message-----
> > From: "Matthew Toseland" <toad at amphibian.dyndns.org>
> > Sent: 09.04.09 00:28:48
> > To: Discussion of development issues <devl at freenetproject.org>
> > Subject: [freenet-dev] Client apps limiting load in bad ways
> 
> > FMS does bad things to limit load: Rather than queueing every key at 
> > MaxRetries=-1, it does all the scheduling itself, doing simple ClientGet's 
> > with no maxretries and no priority (hence prio 2), in order to enforce a 
> > maximum number of parallel requests. Other client apps probably do similar 
> > things. MiniChat presumably does the opposite, constantly requesting keys, 
> > starting a new request for the same key when the last one has finished 
> > (saces: is this true?).
> yes. The code 'emulates' a (passive) request with a short timeout (a few 
> minutes) ;)
> 
> while (currenttime() < timeout) {
>   sendrequest();
>   if (success) break;
>   wait(500);
> }
> 
> > 
> > The reason that FMS's behaviour is a problem is that ULPRs rely on nodes 
> > knowing which keys they are interested in. ULPRs are the main 
> > network-level optimisation for SSK polling/chat/WoT apps, so this is 
> > potentially a big source of problems: excessive load, slow propagation of 
> > messages, etc.
> > 
> > Possible solutions:
> > 
> > DontFetch=true: if set, a request would be purely passive: no requests 
> > would be started as a result of that request, but the node would know it 
> > wants those blocks, and if the blocks do come in, the request would 
> > progress. Suggested usage is that a client app would set up DontFetch 
> > requests for everything it is interested in, with Verbosity sufficiently 
> > high, and then poll using whatever load heuristics it wants. When a block 
> > (typically an SSK block when polling SSK outqueues) is found, the request 
> > either succeeds, fatally fails, or is a redirect. If it is a redirect, 
> > the client would get a SimpleProgress and would schedule a normal request 
> > to fetch the rest of the key. (This should be fairly easy to implement.)
> >
> > Client-configurable cooldown: running a request with MaxRetries=-1 means 
> > fetching it 3 times every half hour. We could make the half-hour part 
> > configurable, and maybe the 3 times too. Note that it is quite likely 
> > that we will eventually impose network-level limits on polling (e.g. 
> > using RecentlyFailed), so setting this low will not necessarily (in the 
> > long run) yield better performance. (This should be moderately easy to 
> > implement.)
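The cooldown semantics described above ("3 tries every half hour", with both 
numbers configurable) can be sketched as a small scheduler. This is an 
illustrative model, not Freenet's actual request scheduler; class and method 
names are made up.

```java
// Sketch of a configurable cooldown: at most `triesPerPeriod` attempts
// per `cooldownMs` window (defaults in the text: 3 tries per 30 minutes).
class CooldownSketch {
    final long cooldownMs;
    final int triesPerPeriod;
    long windowStart = -1;   // start of the current cooldown window
    int triesThisWindow = 0;

    CooldownSketch(long cooldownMs, int triesPerPeriod) {
        this.cooldownMs = cooldownMs;
        this.triesPerPeriod = triesPerPeriod;
    }

    /** Returns 0 if a request may be sent now, else ms until one may be. */
    long timeUntilAllowed(long now) {
        if (windowStart < 0 || now - windowStart >= cooldownMs) {
            windowStart = now;      // new window: reset the try counter
            triesThisWindow = 0;
        }
        if (triesThisWindow < triesPerPeriod) return 0;
        return windowStart + cooldownMs - now;
    }

    /** Record that an attempt was made at time `now`. */
    void recordTry(long now) {
        triesThisWindow++;
    }
}
```

Making `cooldownMs` and `triesPerPeriod` per-request parameters is the whole 
proposal; a network-level limit like RecentlyFailed would simply clamp how 
low clients can usefully set them.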
> > 
> > Any other ideas? Is this the right approach? IMHO we can implement 
> > whatever API fairly easily, but maybe there are some other ideas?
> 
> If everybody is listening for a key with DontFetch=true, the key/data will 
> be known along the regular insert chain, but how is it spread further?

Via ULPRs. Any node which has recently requested the key, or to which we have 
routed a request for it, will be offered the key; if it wants the key, or in 
some cases if it has peers that want the key, it will fetch it and propagate 
it further. This means tracking sensitive information (who has asked for a 
specific key, and who we have sent requests for it to) for 1 hour; after 
that we delete it.
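The bookkeeping described above amounts to a per-key map of interested peers 
with a one-hour expiry. The sketch below models just that; it is not 
Freenet's actual ULPR/failure-table code, and the class and method names are 
invented for illustration.

```java
import java.util.*;

// Per-key interest tracking with 1-hour retention, as described above:
// remember which peers asked for a key (or were routed a request for it),
// and offer the key to those peers if it arrives within the hour.
class UlprInterestSketch {
    static final long RETENTION_MS = 60 * 60 * 1000L; // 1 hour, per the text

    // key -> (peer -> time we learned of that peer's interest)
    private final Map<String, Map<String, Long>> interest = new HashMap<>();

    void recordInterest(String key, String peer, long now) {
        interest.computeIfAbsent(key, k -> new HashMap<>()).put(peer, now);
    }

    /** Peers we should offer `key` to if it arrives at time `now`. */
    Set<String> peersToOffer(String key, long now) {
        Map<String, Long> peers = interest.get(key);
        if (peers == null) return Collections.emptySet();
        // Expire entries older than the retention window.
        peers.values().removeIf(t -> now - t > RETENTION_MS);
        Set<String> result = new HashSet<>(peers.keySet());
        if (peers.isEmpty()) interest.remove(key);
        return result;
    }
}
```

The privacy cost is exactly the map contents: who wanted which key, kept for 
at most an hour.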
> 
> Regards
> saces
