There is still the question of why some transfers are so slow.
Here is what I thought about that. (Nothing new.)


MY THEORY

-> most traffic is routed through a node
-> if the node does not serve out of its store, up/down bandwidth usage will 
be exactly the same (in/out balance)
-> because load management targets 100% up bandwidth usage, all the upload 
is used to the max
-> what happens if the node now starts to serve a request (transfer) out of 
its store?


CONCLUSION:

This out-of-store request is in a bad position, because the upload is 
already fully used (100%) by routing traffic. Because the up limit is a 
hard limit, there is no room for a fast transfer.

If there is always more upload limit than download limit,
all requests should pass through the node quickly -> a reserve is available.

- load management does not target the full upload limit, because the lower
  down limit will trigger load management first.

- inserts are incorporated into load management -> they use incoming bandwidth

- if my node serves every 10th request from its store,
  it MUST HAVE 10% more upload capacity than download capacity
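The capacity claim above can be sanity-checked with a little arithmetic. A minimal sketch (the function name, the 16 kb/s figure, and the 10% store-hit ratio are taken from this post for illustration, not from any real Freenet code):

```python
def required_upload(down_limit: float, store_hit_ratio: float) -> float:
    """Upload capacity needed so that routed traffic (which uses equal
    in/out bandwidth, i.e. as much up as the down limit allows) plus
    store-served replies both fit under the upload limit.

    Every store hit sends outgoing data with no matching incoming
    transfer, so it adds on top of the routed (balanced) traffic.
    """
    return down_limit * (1.0 + store_hit_ratio)

# Every 10th request served from the store -> 10% extra upload needed:
print(required_upload(16.0, 0.10))  # -> 17.6 (kb/s)
```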



TESTED:

Currently the up limit seems to work as a hard limit, and the down limit
seems to work as a soft limit (my node got above it, but it triggered load
management).


In the test I set my limits like this:

upload limit ->  20 kb/s
down limit ->  16 kb/s

-> running transfers seemed to be in good relation to the used upload bandwidth
-> local requests worked (in relation to what my node can handle / the down 
bandwidth limit)
    down bandwidth load management also managed how many local requests
    can be started, in relation to what the node is able to handle (local 
rejects)
-> it did not seem to oscillate; bandwidth usage ran quite linearly
-> the down bandwidth limit seemed to be used to the max (usage over time)
-> the up bandwidth was used MORE than the down bandwidth (usage over time), 
but not to the max; there were still reserves
    it seems the node was able to serve out of its store quite fast, using
    the headroom generated by the difference between the up and down limits


The down limit as a soft limit did quite well, because it does not add
latency like a hard limit does. The node can still process short "waves" and
go over the limit, but in the long run it came very close to the set limit.

How do those short waves happen?
Everything works well if only every 10th request is successful. But in the
real world we might get 3 successful requests in a row, and this generates
a wave. So it is necessary that the up bandwidth usage has headroom/reserve
to process such waves. A 100% used upload cannot process these waves
without latency.
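The latency effect of such a wave can be sketched as simple queue arithmetic (a sketch with made-up numbers; the 60 kb burst size is an assumption chosen only to illustrate 3 back-to-back store hits):

```python
def drain_time(burst_kb: float, capacity: float, steady_load: float) -> float:
    """Seconds until a burst queued on the upload link drains.

    capacity and steady_load are in kb/s. Only the headroom
    (capacity - steady_load) is available to clear the burst; with
    zero headroom the queue never drains, and every later request
    inherits the queueing latency.
    """
    headroom = capacity - steady_load
    if headroom <= 0:
        return float("inf")  # fully saturated link: the wave never clears
    return burst_kb / headroom

# 3 store hits in a row queue ~60 kb of replies (illustrative figure):
print(drain_time(60.0, 20.0, 16.0))  # -> 15.0 s with 4 kb/s headroom
print(drain_time(60.0, 20.0, 20.0))  # -> inf: 100% usage, unbounded latency
```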




SOLUTIONS:

But we all know this. So the question is how to operate a good load
management that achieves both high upload usage and small latencies.


#1    Quick and Dirty - with the current load management

Take the down limit out of the user's reach and have the node set it by
itself, based on the upload limit given in the config (user input).

As an example:

upload limit ->  20 kb/s  (user input)
down limit ->  16 kb/s  (autoset by node -> upload limit minus 20% reserve 
for serving out of store = 80% of the upload limit)

Advantage: local requests are also limited, in correlation to the capacity
of the node.
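The autoset rule can be stated in a few lines. A sketch (the function name and the default 20% reserve are assumptions taken from the example above, not an existing Freenet setting):

```python
def autoset_down_limit(upload_limit: float, reserve: float = 0.20) -> float:
    """Derive the down limit from the user's upload limit, keeping a
    fixed fraction of upload capacity in reserve for serving out of
    store and for absorbing waves.

    The 20% reserve is the figure suggested in the text, not a tuned value.
    """
    return upload_limit * (1.0 - reserve)

print(autoset_down_limit(20.0))  # -> 16.0, matching the example limits
```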

Hmm... maybe a sliding down limit based on psuccess or other indicators is
an option? Some mathematical thought here? -> it would result in optimized
upload usage.
But don't forget: 100% upload usage will not give good latency without QoS
for requests.



#2  Two upload limits - Soft and Hard

Upload limit hard (current)
 -> stays the same -> bandwidth usage does not go beyond it

Upload limit soft (new)
 -> autocalculated by the node, or set to some value like 80% of the hard 
limit (20% reserve for waves / serving out of store)
 -> this limit is the value that upload load management targets
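The two-limit idea can be sketched as a small admission check. Everything here (class name, methods, the 80% default) is hypothetical, just to make the soft/hard split concrete:

```python
class UploadLimiter:
    """Sketch of solution #2: two upload limits.

    hard_limit: absolute cap; traffic is never scheduled beyond it.
    soft_limit: the value load management steers toward, leaving the
    gap (hard - soft) free for waves and store-served transfers.
    """

    def __init__(self, hard_limit: float, soft_fraction: float = 0.80):
        self.hard_limit = hard_limit
        self.soft_limit = hard_limit * soft_fraction

    def accept_routed(self, current_usage: float) -> bool:
        # Routed requests are throttled once the soft target is reached.
        return current_usage < self.soft_limit

    def accept_store_hit(self, current_usage: float) -> bool:
        # Store-served replies may use the reserve, up to the hard cap.
        return current_usage < self.hard_limit

limiter = UploadLimiter(20.0)
print(limiter.soft_limit)              # -> 16.0
print(limiter.accept_routed(17.0))     # -> False: above the soft target
print(limiter.accept_store_hit(17.0))  # -> True: reserve still available
```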


PS:
I think it's not that problematic if you make some tests in the wild.
The current Freenet users are surely hardcore and will survive some tests.
It was good to see that there is still the will to try something new the
hard way instead of taking small steps over months.






    We have several conflicting goals here:
    - Minimise request latency for fproxy.
    - Maximise the probability of a request succeeding.
    - Maximise throughput for large downloads.
    - Use all available upstream bandwidth.
    - Don't break routing by causing widespread backoff.


_______________________________________________
Devl mailing list
[email protected]
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
