We have several conflicting goals here:
- Minimise request latency for fproxy.
- Maximise the probability of a request succeeding.
- Maximise throughput for large downloads.
- Use all available upstream bandwidth.
- Don't break routing by causing widespread backoff.

What makes this even more problematic:
- The probability of a CHK request succeeding is under 10%.

This may improve, but probably not in the near future.

Therefore, in order to use up our bandwidth, we need to accept enough requests 
that we usually have a queue of data to send, even though most of them will 
fail. This means that sometimes a lot of requests will succeed at once, and 
the request latency rises dramatically - which is bad for fproxy.
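To make the tension concrete, here is a rough back-of-the-envelope sketch in 
Python. It models acceptances as independent with a fixed success probability; 
all the numbers (10% success, 100 concurrent requests) are illustrative 
assumptions, not measured values:

```python
# Sketch of the queueing tension: to fill bandwidth at ~10% success we
# must accept many requests, but the binomial tail means bursts of
# simultaneous successes (and hence latency spikes) are inevitable.
from math import comb

p = 0.10          # assumed CHK success probability (illustrative)
n = 100           # requests accepted concurrently (illustrative)
expected = n * p  # average number that succeed and queue data to send

def tail(n, p, k):
    """P(at least k of n requests succeed) under a binomial model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"expected simultaneous successes: {expected:.0f}")
print(f"P(>= 20 succeed at once): {tail(n, p, 20):.4f}")
```

The tail probability is small per round but we run many rounds, so latency 
spikes happen regularly - which is exactly the problem for fproxy.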

Hence IMHO we need two kinds of requests:
- Bulk requests: The vast majority of CHK requests are long-term queued 
requests for large files. Latency doesn't matter much for these.
- Realtime requests: Fproxy requests are fewer, but have strong latency 
requirements, as well as needing a reasonable success probability. They are 
especially important for new users trying out Freenet.

I appreciate this has been suggested before on the lists and I opposed it 
then ... I now think I was wrong. Who suggested it first? Thelema, maybe? 
The main reason I opposed it was that it makes it easier to distinguish 
requests. It also increases complexity. IMHO it is necessary nonetheless, and 
in any case, later on, when we have passive requests (IMHO vital both for low 
uptime and for low latency/polling), we will need something similar.

The first thing is to implement the turtling proposal:
- No request should take more than 60 seconds to transfer.
- Only the node responsible for a severe delay should be backed off from.
- The rest of the chain should treat it as a quick transfer failure / DNF.
- The node closest to the slow node should transfer the data anyway and offer 
it via ULPRs.
- The node sending the request should treat it as a normal request.
Hopefully this will be working today. After that, maybe some tuning of the 
load limiting code, but I suspect there isn't much we can do given the 
opposing forces mentioned above.
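The turtling rules above can be sketched roughly as follows. This is a 
hypothetical illustration, not the real node code; the `Action` names and the 
two boolean flags are invented for clarity:

```python
# Sketch of per-node handling of a severely delayed transfer under the
# turtling proposal. Only the peer of the slow node penalises it; that
# same node keeps pulling the data ("turtles") and offers it via ULPRs;
# everyone further up the chain sees a quick, non-penalising failure.
from enum import Enum

class Action(Enum):
    BACK_OFF = "back off from the slow peer"
    TURTLE = "keep transferring slowly, offer block via ULPRs"
    QUICK_FAIL = "treat as quick transfer failure / DNF"

def on_stalled_transfer(adjacent_to_slow_node: bool) -> set:
    if adjacent_to_slow_node:
        # The node directly downstream of the delay is the only one
        # that backs off, and it completes the transfer anyway.
        return {Action.BACK_OFF, Action.TURTLE}
    # The rest of the chain is not penalised and fails fast.
    return {Action.QUICK_FAIL}
```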

Longer term (post 0.8), a bulk request flag makes a lot of sense. We can 
optimise load limiting for throughput for bulk requests, and for latency for 
realtime requests. For bulk requests, we can accept lots of requests to fill 
up our bandwidth usage, and if they all succeed then it costs us some latency 
but it isn't a big deal. For realtime requests, we can accept a smaller 
number of requests, so that even if they all succeed the latency cost will be 
reasonable. Realtime requests will still have turtling support. Bulk requests 
will not (although they will have longer transfer timeouts): when a bulk 
transfer times out, we wait, say, 30 seconds to see if there is a 
cancellation. If there is, we treat it like a DNF and don't back off; if 
there isn't, the slow node is our peer and we back off from it. Realtime 
requests' block transfers will have higher priority, but because we accept 
few of them, bulk request transfers will still go through.


_______________________________________________
Devl mailing list
[email protected]
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl