toad wrote:

> On Mon, Nov 27, 2006 at 10:02:09PM +0100, Jano wrote:
>> Michael Rogers wrote:
>> 
>> > toad wrote:
>> >> In which case, can you detect at what point the throttle becomes the
>> >> limiting factor rather than the submission of requests?
>> > 
>> > The network with no flow control gets better throughput than the
>> > networks with flow control even at very low request rates - this
>> > probably just reflects the fact that it's hard to get 100% utilisation
>> > from an adaptive throttling mechanism. Anyway the difference is small
>> > at low request rates.
>> > 
>> > The difference gets larger as the request rate increases - then
>> > somewhere between 10 and 12 requests per node per minute, the
>> > throughput of the network without flow control drops practically to
>> > zero as the message queues get long enough to cause most requests to
>> > time out.
>> 
>> Are these queues FIFO? Would it make any sense to try a LIFO policy
>> governed by available bandwidth?
> 
> So the oldest stuff never completes, instead of the newest stuff never
> completing?

Newer stuff completes instead of nothing completing? A FIFO queue can have
unbounded latency, while a LIFO queue has just the servicing latency for the
requests that manage to get a slot.

I'm sleepy and I don't know the current Freenet implementation, so I'm
probably thinking wrong and wasting time, but just for the sake of
discussion.

I'll simplify everything to a single queue, for both locally originated
requests and remote requests. I'll assume it's overloaded due to outbound
bandwidth limits, while inbound bandwidth is not a concern due to link
asymmetry (for this discussion at least). Or maybe it's CPU-bound; at first
glance that seems equivalent to me.

A FIFO queue will grow without bound, since we have more inbound capacity
than outbound (plus request processing time). Hence, to avoid unbounded
latency and everything timing out on the client side, you need to implement
some request dropping. If this is badly tuned, you'll end up wasting all the
requests you serve (queue too long), or dropping too aggressively and
underusing your capacity (dropping too soon). Since you can't know the
remote side's timeout, it seems a risky approach at best.
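To make the FIFO failure mode concrete, here is a toy discrete-time sketch
(not Freenet code; the arrival rate, service rate and timeout are invented):
when requests arrive faster than they are served, waiting time grows without
bound, and eventually every request you serve has already timed out at the
client.

```python
# Toy FIFO overload simulation (all parameters are assumptions for
# illustration, not Freenet's actual numbers).
from collections import deque

ARRIVALS_PER_TICK = 2   # inbound capacity (assumed)
SERVICES_PER_TICK = 1   # outbound capacity (assumed)
CLIENT_TIMEOUT = 50     # ticks a client waits before giving up (assumed)
TICKS = 500

queue = deque()
served_in_time = served_late = 0

for now in range(TICKS):
    for _ in range(ARRIVALS_PER_TICK):
        queue.append(now)              # remember each request's arrival tick
    for _ in range(SERVICES_PER_TICK):
        if queue:
            waited = now - queue.popleft()
            if waited <= CLIENT_TIMEOUT:
                served_in_time += 1
            else:
                served_late += 1       # wasted work: client already gave up

print(f"served in time: {served_in_time}, served late (wasted): {served_late}")
print(f"queue length at end: {len(queue)}")
```

With these numbers, only the requests served in the first ~100 ticks make
the timeout; everything after that is served late, i.e. wasted, and the
queue itself keeps growing.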

A LIFO queue has roughly the processing time as latency, plus two network
transits (negligible, let's say) for a single hop [*]. Thus, a request
served is a request that will likely not time out on the client side. You
simply won't serve the requests that have the bad luck to be pushed back.
You can limit queue size to avoid unbounded memory usage, but you won't
waste the requests you do serve.
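Under the same toy load, a size-capped LIFO behaves as described above (again
a sketch with invented numbers, not Freenet's queueing): everything that
actually gets served is fresh and well within the client timeout, while older
entries are simply shed when the cap is hit.

```python
# Toy LIFO-with-cap simulation; parameters are illustrative assumptions.
from collections import deque

ARRIVALS_PER_TICK = 2
SERVICES_PER_TICK = 1
CLIENT_TIMEOUT = 50
MAX_QUEUE = 100      # bound memory; excess requests are shed
TICKS = 500

stack = deque()
served_in_time = served_late = dropped = 0

for now in range(TICKS):
    for _ in range(ARRIVALS_PER_TICK):
        stack.append(now)
        if len(stack) > MAX_QUEUE:
            stack.popleft()            # shed the oldest (most stale) request
            dropped += 1
    for _ in range(SERVICES_PER_TICK):
        if stack:
            waited = now - stack.pop() # LIFO: serve the newest first
            if waited <= CLIENT_TIMEOUT:
                served_in_time += 1
            else:
                served_late += 1

print(f"served in time: {served_in_time}, late: {served_late}, dropped: {dropped}")
```

Same offered load as the FIFO case, but no served request is wasted: the
cost moves entirely into the dropped requests, which is exactly the
"some things quickly, others never" trade-off.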

On the remote client side, as long as timeout is measured from reaching the
head of the queue rather than from entering it, things don't matter that
much. Some requests will be served and others won't, but the client can't do
much about it, except enlarge its timeouts. Actually, this is what's done in
ed2k and the like: huge queues, and the advice is to stay online 24/7 (so
you, as a client, never time out).

If request dropping is done right, it boils down to a matter of perception
on the client side: for a user browsing, what's more upsetting, everything
loading slowly, or some things loading quickly and others never?

Now, admittedly this doesn't seem fair: a client with high outbound
bandwidth could starve other clients by targeting a single node. It has the
"simple" solution of doing round-robin over your clients (I foresee problems
here).
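The round-robin fix above could look something like this hypothetical sketch
(the `submit`/`serve_round` names and per-client LIFO stacks are my own
invention for illustration): service slots rotate over clients rather than
over requests, so a flooding client can no longer starve a quiet one.

```python
# Hypothetical per-client round-robin scheduler (illustrative only).
from collections import defaultdict
from itertools import cycle

queues = defaultdict(list)            # client id -> stack of its requests

def submit(client, request):
    queues[client].append(request)

def serve_round(slots):
    """Serve up to `slots` requests, rotating one slot per client per turn."""
    served = []
    clients = cycle(list(queues))
    while slots > 0 and any(queues.values()):
        client = next(clients)
        if queues[client]:
            served.append((client, queues[client].pop()))  # LIFO per client
            slots -= 1
    return served

# A "fast" client floods the node; a "slow" one sends a single request.
for i in range(10):
    submit("fast", f"req-{i}")
submit("slow", "req-0")

served = serve_round(4)
print(served)   # the slow client still gets one of the four slots
```

The foreseeable problems include choosing what counts as a "client"
(spoofable identities) and head-of-line effects within each per-client
stack, which this sketch doesn't address.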

[*] This omits the need to re-route requests that are misses. I guess the
problem lies there. Gonna dream about it...

Surely these situations have been studied for conventional networks? Some
quick googling:

http://www.actapress.com/PaperInfo.aspx?PaperID=23645 
This says I'm wrong, if it were applicable.

http://cat.inist.fr/?aModele=afficheN&cpsidt=3464776
This seems relevant.

http://icawww.epfl.ch/hamdi/research.htm
Funnily, they say LIFO is great, and they have a patent.

Yay, bedtime.
