Take a look at the "Histogram of requested keys" and compare it to the
"Histogram of successful externally requested keys" in the Node Status
Interface. Notice anything odd?

I certainly have. The longest bars in the successes are among the
shortest in the requests. In other words, my node is much better at
getting some keys than others (which is probably a good thing, since it
implies specialization), but those are the keys I get the fewest
requests for (which is a bad thing and implies serious bugs somewhere).
In short, the better I become at fetching a given key, the less likely
a request for it is to be routed to me - or so it seems to me, at
least. Can anyone offer a more rational explanation for this?

The attached statistics are very small because I had to restart my node
recently after a period of downtime, and the traffic hasn't really
picked up yet. However, take note of keys d and e - there are fewer
requests for d than for e, despite d having more successes (not just a
higher percentage, but more total successes). The same pattern persists
even after tens of thousands of requests.

This behaviour has persisted for several builds (I don't know when it
started).
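
To make the comparison concrete, here is a small throwaway sketch
(plain Java, nothing from fred) that computes the per-prefix success
rate. The arrays are just the bar lengths read off the two histograms
attached below:

    public class HistogramCompare {
        public static void main(String[] args) {
            // Bar lengths read off the attached histograms:
            // requests per hex prefix 0..f, and successes per prefix.
            int[] requests  = { 3, 7, 2, 8, 2, 5, 2, 5, 1, 4, 6, 4, 3, 5, 9, 7 };
            int[] successes = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0 };
            for (int i = 0; i < 16; i++) {
                double rate = requests[i] == 0
                        ? 0.0 : (double) successes[i] / requests[i];
                System.out.printf("%s: %d/%d = %.2f%n",
                        Integer.toHexString(i), successes[i], requests[i], rate);
            }
        }
    }

Key d comes out at 2/5 = 40% against e's 1/9 = ~11%, with more total
successes on fewer requests - that is the inversion I mean.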

BTW, my node typically has some 50 MB of data waiting to be received.
At my 20 kB/s of downstream that is roughly 2,500 seconds, so whatever
gets retrieved last has waited over 40 minutes - and will then wait on
the send queue as well. By the time it reaches whoever requested it,
the requester has most likely already got it from somewhere else, and
it has thus only managed to delay whatever data was retrieved after
it. I know that data is retrieved/sent on several parallel connections
simultaneously, but this still seems a problem to me. Would it make
sense to put an upper limit on data queues (and reject queries/inserts
if we go over it)? It would have to be a per-peer limit, adjusted by
the peer's transfer speed - perhaps an upper limit on the estimated
time to finish sending/receiving the current queue to/from that peer.
Then, if a query (or insert) arrives, consider peers above this limit
"busy" and don't route to them (see the sketch below). If everyone is
busy, reject the query and hope it gets routed to a less busy part of
the network.
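
To be concrete about the "busy" test, here is a rough sketch of what I
have in mind. Names like Peer, queuedBytes() and transferBytesPerSec()
are made up for illustration - they are not fred's actual interfaces:

    import java.util.List;

    public class BusyPeerFilter {
        /** Maximum estimated seconds to drain a peer's queue. */
        private final double maxDrainSeconds;

        public BusyPeerFilter(double maxDrainSeconds) {
            this.maxDrainSeconds = maxDrainSeconds;
        }

        /** A peer is "busy" if moving its queued bytes at its observed
         *  rate would take longer than the limit allows. */
        public boolean isBusy(Peer p) {
            double rate = p.transferBytesPerSec();
            if (rate <= 0) return true; // no measured throughput: treat as busy
            return p.queuedBytes() / rate > maxDrainSeconds;
        }

        /** Route to the first non-busy peer in routing order; null means
         *  everyone is busy, so reject the query/insert. */
        public Peer pickRoute(List<Peer> candidatesInRoutingOrder) {
            for (Peer p : candidatesInRoutingOrder) {
                if (!isBusy(p)) return p;
            }
            return null;
        }

        /** Hypothetical minimal peer interface, just for this sketch. */
        public interface Peer {
            long queuedBytes();
            double transferBytesPerSec();
        }
    }

The point of scaling by the peer's transfer rate rather than using a
fixed byte limit is that a fixed limit would penalize fast peers and
still let slow peers build up hopeless backlogs; an estimated-time
limit caps the worst-case staleness of the data instead.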

Things of note:
I'm running the unstable branch and updating about once per day. I
have a 1 GB datastore (which, by the way, has shown very little
specialization so far). I have 20 kB/s up/downstream dedicated to
Freenet. Unfortunately, I can't keep my machine up all the time; it's
typically up for 6 to 13 hours a day (should I run transient?).

***************************

Histogram of requested keys.
This count has nothing to do with keys in your datastore
Nov 8, 2003 6:15:56 PM
keys: 73
scale factor: 1.0 (This is used to keep lines < 64 characters)

   0 |===
   1 |=======
   2 |==
   3 |========
   4 |==
   5 |=====
   6 |==
   7 |=====
   8 |=
   9 |====
   a |======
   b |====
   c |===
   d |=====
   e |=========
   f |=======

peaks (count/mean)
1 --> (1.5342466)
3 --> (1.7534246)
5 --> (1.0958904)
7 --> (1.0958904)
a --> (1.3150685)
e --> (1.9726027)

*************************

Histogram of successful externally requested keys.
This count has nothing to do with keys in your datastore
Nov 8, 2003 6:15:52 PM
keys: 5
scale factor: 1.0 (This is used to keep lines < 64 characters)

   0 |=
   1 |
   2 |
   3 |
   4 |
   5 |=
   6 |
   7 |
   8 |
   9 |
   a |
   b |
   c |
   d |==
   e |=
   f |

******************

Histogram of keys in fred's data store
These are the keys to the data in your node's local cache (DataStore)
Nov 8, 2003 6:29:40 PM
keys: 3539
scale factor: 0.26556017994880676 (This is used to keep lines < 64 characters)

   0 |============================================================
   1 |==========================================================
   2 |======================================================
   3 |==========================================================
   4 |==================================================
   5 |=========================================================
   6 |================================================================
   7 |============================================================
   8 |========================================================
   9 |============================================================
   a |====================================================
   b |============================================================
   c |============================================================
   d |==============================================================
   e |===========================================================
   f |===============================================================

peaks (count/mean)
0 --> (1.0307996)
3 --> (0.99463123)
6 --> (1.0895734)
9 --> (1.0262786)
d --> (1.0579259)
f --> (1.0760102)
