On Mon, Nov 17, 2003 at 03:35:37PM -0600, Salah Coronya wrote:
> Niklas Bergh wrote:
> 
> |>|>> This has nothing to do with load balancing, but should improve
> |>|>> routing, while increasing CPU usage by some unknown amount.  Thoughts?
> |>|
> |>|
> |>| Actually this has A LOT to do with load balancing, indirectly.
> |>| Given that the requestor should have a timeout capability, how
> |>| would the requestee handle timeouts on his end ? Just drop them?
> |>| Pass back a QRejected message to the requestor ? Maybe buffer
> |>| these rejections and pass back a block of them, every XXX ms ?
> |>| The central issue is efficient state chain maintenance at the
> |>| requestor.
> |>
> |>Isn't that sort of what happened when we stopped QR'ing due to
> |>bandwidth? We'd keep queuing requests until either the other node
> |>figured we were taking too long (and lowered our estimate accordingly),
> |>or routingTime / messageSendTime exceeded the threshold and we started
> |>QR'ing.
> |>
> |>I don't know how Freenet dequeues requests, but if it does so FIFO,
> |>you are proposing instead that if we have a request and a node with a
> |>good estimate with respect to that request, we move it to the head of
> |>the queue?
> |
> | Yea.. I think that pretty much was the thought. Prefer to handle
> | requests for which we have good estimates somewhat over requests that
> | we don't have good ones for (not necessarily dropping the second type..
> | just nudge good-estimate requests somewhat further up the queue).
> | Furthermore, if the node needs to send a QR, it could prefer sending it
> | for a request whose key we don't have such a good estimate on...
> |
> | My thought on this is that it should probably help specialization to
> | occur somewhat.
> |
> | /N
> 
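The nudging scheme described above could be sketched as a priority queue keyed on estimate quality, FIFO among equal estimates. This is a hypothetical illustration, not Freenet's actual queue code; the `RequestQueue` class and millisecond estimates are made up for the example:

```python
import heapq
import itertools

class RequestQueue:
    """Toy request queue: requests with better (lower) estimated routing
    times are nudged ahead; a sequence counter keeps FIFO order among
    requests with equal estimates."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves FIFO order

    def push(self, request_key, estimate_ms):
        # Lower estimated routing time => dequeued earlier.
        heapq.heappush(self._heap, (estimate_ms, next(self._seq), request_key))

    def pop(self):
        estimate_ms, _, request_key = heapq.heappop(self._heap)
        return request_key, estimate_ms

q = RequestQueue()
q.push("key-A", estimate_ms=800)   # poor estimate
q.push("key-B", estimate_ms=120)   # good estimate: jumps ahead
q.push("key-C", estimate_ms=800)
print(q.pop())  # ('key-B', 120) comes out first
```

Note this never drops the poorly-estimated requests; they are only served later, matching the "just nudge, don't drop" intent above.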
> Here's a drastic idea: how about just dropping QRs altogether? As we
> continue to take and route requests, our routingTime / messageSendTime
> will increase, and our neighbors will sense it. Once our neighbors
> determine we are taking "too long", they should back off (probably using
> linear backoff as opposed to exponential).

That is essentially what I did when I removed limiting by bandwidth
usage (the main reason for QRing). QR is still needed internally, e.g.
for looped requests. I could of course have removed messageSendTime and
routingTime too. I don't think this approach is viable, though, which is
why I reinstated bwlimit-based QRing.
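The looped-request case is why QR can't simply be dropped: a node that sees the same request twice is on a routing loop and must reject it back rather than route it again. A minimal sketch of that internal use, with a hypothetical `Node` class and numeric request IDs standing in for the real protocol:

```python
class Node:
    """Toy node that remembers request IDs it has already routed, so a
    looped request can be bounced back with a QueryRejected."""

    def __init__(self, name):
        self.name = name
        self._seen_ids = set()

    def handle_request(self, request_id):
        if request_id in self._seen_ids:
            # Loop detected: reject back to the requestor instead of
            # routing the same request a second time.
            return "QueryRejected"
        self._seen_ids.add(request_id)
        return "Routed"

n = Node("A")
print(n.handle_request(42))  # Routed
print(n.handle_request(42))  # QueryRejected (the request looped back)
```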
> 
> Combined with the above idea, we'll send "specialized" data faster than
> "non-specialized" data, which the estimators should reflect.

We will anyway, by the definition of specialization, especially under NGR.
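The linear-vs-exponential backoff contrast raised above can be sketched as follows. The function, base delay, and cap are all hypothetical parameters chosen for illustration, not values from Freenet:

```python
def backoff_delay(failures, base_ms=200, cap_ms=30_000, linear=True):
    """Delay before re-contacting a node that keeps taking "too long".

    Linear backoff grows gently, so a briefly overloaded node is retried
    again soon; exponential backoff doubles the wait on each failure and
    effectively shuts a node out much faster.
    """
    if linear:
        delay = base_ms * failures
    else:
        delay = base_ms * (2 ** (failures - 1))
    return min(delay, cap_ms)

# After 5 consecutive "too slow" observations:
print(backoff_delay(5, linear=True))   # 1000 ms
print(backoff_delay(5, linear=False))  # 3200 ms
```

The gentler linear curve matches the suggestion above: neighbors ease off an overloaded node without abandoning it outright.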
-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.

_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
