On Friday 15 May 2009 17:05:09 Robert Hailey wrote:
> 
> On May 14, 2009, at 5:06 PM, Matthew Toseland wrote:
> 
> > On Thursday 14 May 2009 20:36:31 Robert Hailey wrote:
> >>
> >> On May 14, 2009, at 12:17 PM, Matthew Toseland wrote:
> >>
> >>> Once connected to my node, it repeatedly RNFed on the top block of
> >>> TUFI. Performance with one peer is expected to be poor, but it is
> >>> worse, IMHO, than it could be. Some sort of fair sharing scheme
> >>> ought to allow a darknet peer that isn't doing much to have a few
> >>> requests accepted while we are rejecting tons of requests from
> >>> other darknet peers and opennet peers. (#3101)
> >>
> >> I second that, but I'm not sure as to the best implementation.
> >>
> >> On the surface this appears to be the same issue as balancing local/
> >> remote requests. I.e. if your node is busy doing everyone else's
> >> work, your requests should have a clear advantage when you finally
> >> get around to clicking a link.
> >
> > Possibly. It is indeed a load balancing problem. Queueing will help,
> > or maybe simulated queueing.
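
(To make the fair sharing part of #3101 concrete: one option is to
guarantee each peer a couple of accepted requests in flight even while we
are rejecting heavily. A minimal sketch follows; the class, the quota of
2, and the overloaded flag are all illustrative, not existing
freenet.node code.)

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the fair sharing idea (#3101): a darknet peer
// with few requests in flight still gets some accepted even under
// overload. Names and numbers are assumptions, not actual Freenet code.
public class FairShareSketch {

	private static final int GUARANTEED_SLOTS_PER_PEER = 2;
	private final Map<String, Integer> inFlightByPeer =
		new HashMap<String, Integer>();

	/**
	 * @param peer       identity of the requesting peer
	 * @param overloaded true if the normal load limiter would reject
	 * @return whether to accept the request
	 */
	public synchronized boolean accept(String peer, boolean overloaded) {
		Integer boxed = inFlightByPeer.get(peer);
		int inFlight = (boxed == null) ? 0 : boxed.intValue();
		// A nearly idle peer may use its guaranteed quota even when
		// the node as a whole is overloaded.
		if (overloaded && inFlight >= GUARANTEED_SLOTS_PER_PEER)
			return false;
		inFlightByPeer.put(peer, inFlight + 1);
		return true;
	}

	public synchronized void completed(String peer) {
		Integer boxed = inFlightByPeer.get(peer);
		if (boxed != null && boxed.intValue() > 0)
			inFlightByPeer.put(peer, boxed.intValue() - 1);
	}
}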
> >>
> >> I think this conflicts with the current throttling mechanism; piling
> >> on requests till one or both nodes say 'enough',
> >
> > Is this how it works now?
> 
> Yep. To the best of my understanding...
> 
> Q: How do we determine if we are going to take a remote request?
> A: If there is "room" to incur its expected bandwidth.
> 
> Premise-1: all CHKs transfer at an equal rate
> 
> Result-1: new transfers squeeze bandwidth from other transfers.
> Result-2: a node will accept any number of transfers, up to the point
> where all of them would go over the maximum allowed transfer time.
> 
> Effectively, this slows every transfer down to the slowest allowed rate
> (not counting local traffic or non-busy nodes). This is why I advocated
> lowering the maximum allowed transfer time as a general speedup.

Which we tried, and it caused a massive slowDOWN. Mostly because most requests 
do not result in a transfer, so we need a substantial number of requests to 
compensate for that. The real solution is a bulk flag, but we cannot 
implement that until 0.7.5 has shipped.
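
For reference, the accept-if-room logic described above amounts to a
bandwidth-liability check, roughly like the sketch below. The names and
the exact formula are illustrative, not the actual freenet.node code:

// Hypothetical sketch of accept-if-room: treat each accepted request as
// a bandwidth liability, and reject once the total could no longer be
// delivered within the maximum allowed transfer time. Not Freenet code.
public class AcceptanceSketch {

	private final double outputBytesPerSecond; // configured bandwidth limit
	private final double maxTransferSeconds;   // slowest transfer we tolerate
	private double committedBytes;             // bytes promised to accepted transfers

	public AcceptanceSketch(double outputBytesPerSecond,
			double maxTransferSeconds) {
		this.outputBytesPerSecond = outputBytesPerSecond;
		this.maxTransferSeconds = maxTransferSeconds;
	}

	/**
	 * Accept if, assuming all CHKs transfer at an equal rate (Premise-1),
	 * every transfer including this one would still finish within
	 * maxTransferSeconds. Result-2 falls out: we keep accepting until
	 * the shared bandwidth would push all transfers past the limit.
	 */
	public synchronized boolean accept(double expectedTransferBytes) {
		double liability = committedBytes + expectedTransferBytes;
		if (liability > outputBytesPerSecond * maxTransferSeconds)
			return false; // no "room": reject the request
		committedBytes = liability;
		return true;
	}

	public synchronized void transferCompleted(double bytes) {
		committedBytes -= bytes;
	}
}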
> 
> >> and if we reserve some space we will not hit our bandwidth goal. Or
> >> that requests are actually "competing" like collisions on a busy
> >> Ethernet channel rather than having an order.
> >
> > Yes, it is very much like that.
> 
> Does that mean that the logical alternative is token passing?
> 
> In fact... guaranteeable request acceptance (token passing) might
> actually be a logical prerequisite for a fair-queueing system.
> 
> Interestingly, any node can measure how many requests it can accept
> right now (for bandwidth), but can only guess as to its peers' capacity
> (by backoff); so we may well accept requests which we cannot make good
> on, because our peers cannot accept them (Ethernet-collision logic at
> every hop).

Yes. On 0.5 we tried to estimate the rate at which we could accept requests 
and publish this, but we could never take our peers' inter-request times into 
account when calculating our own without causing network collapse.

Some system involving queueing is likely the best we can achieve. Token 
passing (advertising to several nodes when we have an available slot) is 
really just an essential optimisation for queueing. But by definition 
queueing results in all requests spending longer on a hop, and some requests 
spending much longer (and perhaps timing out). Simulated queueing (randomly 
dropping requests to maintain queue lengths, but routing immediately those we 
accept) might be an alternative for lower latency at the cost of lower 
accuracy. Maximum queue times could be radically different for requests with 
and without the bulk flag.
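
To make the simulated-queueing option concrete, the drop decision might
look roughly like this. The target length, the linear drop curve, and
all names here are assumptions on my part, not existing code:

import java.util.Random;

// Hypothetical sketch of "simulated queueing": instead of holding
// requests in a real queue, drop them with a probability chosen so that
// a *virtual* queue stays near a target length, and route the accepted
// ones immediately. Not actual Freenet code.
public class SimulatedQueue {

	private final Random random = new Random();
	private final double targetQueueLength; // virtual backlog we aim for
	private double virtualQueueLength;      // requests currently "queued"

	public SimulatedQueue(double targetQueueLength) {
		this.targetQueueLength = targetQueueLength;
	}

	/** Called when a request arrives; true means route it immediately. */
	public synchronized boolean offer() {
		// Drop probability rises linearly once the virtual queue exceeds
		// its target, reaching certainty at twice the target.
		double overload = virtualQueueLength / targetQueueLength;
		double dropProbability = Math.min(1.0, Math.max(0.0, overload - 1.0));
		if (random.nextDouble() < dropProbability)
			return false; // simulate a full queue: reject without waiting
		virtualQueueLength += 1.0;
		return true;
	}

	/** Called when an accepted request completes or is forwarded on. */
	public synchronized void completed() {
		virtualQueueLength = Math.max(0.0, virtualQueueLength - 1.0);
	}
}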
> 
> >> One thing that I was playing around with earlier was refactoring
> >> PacketThrottle to be viewed from the queue side. Rather than
> >> "sendThrottledPacket" blocking till "a good time" to enqueue a
> >> message based on the throttle, all the packets would be serially
> >> available, interleaved (e.g. PacketThrottle.getBulkPackets(n)
> >> returns the next 'n' packets).
> >
> > Good idea... I thought it was somewhat like that already? It is
> > important in some cases for it to block...
> 
> Only in that they feed an outbound message queue rather than actually  
> sending a packet. But it looks to me like a bit of cruft from a  
> previous design.
> 
> In any case, you have a design note in the comments of PacketThrottle
> that it would be better to have a sorted list or red/black tree rather
> than a ticket system (where all threads wake up); maybe a new class
> needs to be written (BulkQueue) that *only* interleaves waiters (round
> robin?), with the packet throttle then used only for actually sending
> packets.

Yeah...
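
Something like the following, perhaps? A minimal sketch of a BulkQueue
that *only* interleaves per-sender queues round robin; the class and
method names follow your suggestion, everything else is illustrative:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical BulkQueue along the lines suggested above: per-sender
// packet queues are interleaved round robin, so the sender loop can pull
// the next 'n' packets without every waiter blocking on the throttle.
// This is only a design sketch, not the existing PacketThrottle code.
public class BulkQueue<T> {

	private final Queue<Queue<T>> senders = new ArrayDeque<Queue<T>>();

	/** Register one sender's ordered packets as a unit. */
	public synchronized void addSender(Queue<T> packets) {
		if (!packets.isEmpty())
			senders.add(packets);
	}

	/** Return up to n packets, one per sender per round (round robin). */
	public synchronized List<T> getBulkPackets(int n) {
		List<T> out = new ArrayList<T>(n);
		while (out.size() < n && !senders.isEmpty()) {
			Queue<T> q = senders.poll();
			out.add(q.poll());
			if (!q.isEmpty())
				senders.add(q); // rotate to the back for the next round
		}
		return out;
	}
}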
