Same here; the store hit ratio is only 2.3% of the cache hit ratio:

Node status overview

networkSizeEstimateSession: 688 nodes
networkSizeEstimate24h: 368 nodes
networkSizeEstimate48h: 440 nodes
nodeUptime: 6d9h


Store size

Cached keys: 257,159 (7.84 GiB)
Stored keys: 59,060 (1.80 GiB)
Overall size: 316,219/491,021 (9.65 GiB/14.9 GiB)
Cache hits: 21,622 / 127,625 (16%)
Store hits: 444 / 113,433 (0%)
Avg. access rate: 0/s
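
(That 2.3% is just the ratio of the two hit rates: the store hit rate is
444 / 113,433 = 0.39%, the cache hit rate is 21,622 / 127,625 = 16.9%,
and 0.39 / 16.9 = 2.3%.)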



On Friday, 27 October 2006 at 19:52, toad wrote:
> Two more from Frost:
>
> ----- Anonymous ----- 2006.10.09 - 02:17:14GMT -----
> nodeUptime: 1d3h
> Freenet 0.7 Build #990 r10647
>
> * Cached keys: 922,080 (28.1 GiB)
> * Stored keys: 17,904 (559 MiB)
> * Overall size: 939,984/1,227,554 (28.6 GiB/37.4 GiB)
> * Cache hits: 2,841 / 25,117 (11%)
> * Store hits: 105 / 20,426 (0%)
> * Avg. access rate: 0/s
>
> ----- Anonymous ----- 2006.10.09 - 11:47:34GMT -----
>
> Node status overview
>
> * bwlimitDelayTime: 71ms
> * nodeAveragePingTime: 428ms
> * networkSizeEstimateSession: 401 nodes
> * avrConnPeersPerNode: 6.438919 peers
> * nodeUptime: 22h2m
> * missRoutingDistance: 0.0451
> * backedoffPercent: 34.7%
> * pInstantReject: 0.0%
>
>
> Current activity
>
> * Inserts: 4
> * Requests: 23
> * ARK Fetch Requests: 7
>
>
> Peer statistics
>
> * Connected: 10
> * Backed off: 5
> * Disconnected: 7
> * Listen Only: 1
>
>
> Peer backoff reasons
>
> * ForwardRejectedOverload 5
>
> Location swaps
>
> * locChangeSession: 8.703159E-1
> * locChangePerSwap: 1.611696E-2
> * locChangePerMinute: 6.581585E-4
> * swapsPerMinute: 4.083639E-2
> * noSwapsPerMinute: 1.358188E0
> * swapsPerNoSwaps: 3.006682E-2
> * swaps: 54
> * noSwaps: 1796
> * startedSwaps: 9091
> * swapsRejectedAlreadyLocked: 17537
> * swapsRejectedRateLimit: 10
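
(Sanity check on the derived figures above: swapsPerNoSwaps = swaps /
noSwaps = 54 / 1796 = 3.0067E-2, and locChangeSession = locChangePerSwap
* swaps = 1.611696E-2 * 54 = 8.7032E-1, so the counters are internally
consistent.)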
>
>
> Bandwidth
>
> * Total Output: 703 MiB (9.08 KiBps)
> * Payload Output: 502 MiB (6.48 KiBps) (71%)
> * Total Input: 557 MiB (7.19 KiBps)
> * Output Rate: 9.96 KiBps
> * Input Rate: 7.91 KiBps
>
>
> Store size
>
> * Cached keys: 35,894 (1.09 GiB)
> * Stored keys: 27,020 (844 MiB)
> * Overall size: 62,914/61,377 (1.91 GiB/1.87 GiB)
> * Cache hits: 1,700 / 47,833 (3%)
> * Store hits: 188 / 37,905 (0%)
> * Avg. access rate: 1/s
>
>
> JVM info
>
> * Used Java memory: 93.8 MiB
> * Allocated Java memory: 108 MiB
> * Maximum Java memory: 190 MiB
> * Available CPUs: 1
> * Running threads: 162
>
> On Sat, Oct 07, 2006 at 12:01:21AM +0100, toad wrote:
> > 1. THE STORE IS *LESS* EFFECTIVE THAN THE CACHE!
> > ------------------------------------------------
> >
> > Please could people post their store statistics? Cache hits, store hits,
> > cached keys, stored keys.
> >
> > So far:
> > [23:11] <nextgens> # Cached keys: 6,389 (199 MiB)
> > [23:11] <nextgens> # Stored keys: 24,550 (767 MiB)
> > [23:09] <nextgens> # Cache hits: 217 / 12,738 (1%)
> > [23:09] <nextgens> # Store hits: 14 / 10,818 (0%)
> >
> > (Cached hits / cached keys) / (Stored hits / stored keys) = 59.56
> >
> > [23:12] <cyberdo> # Cached keys: 17,930 (560 MiB)
> > [23:12] <cyberdo> # Stored keys: 24,895 (777 MiB)
> > [23:14] <cyberdo> # Cache hits: 178 / 3,767 (4%)
> > [23:14] <cyberdo> # Store hits: 11 / 2,970 (0%)
> >
> > (Cached hits / cached keys) / (Stored hits / stored keys) = 22.47
> >
> > [23:14] <sandos> # Cached keys: 45,148 (1.37 GiB)
> > [23:14] <sandos> # Stored keys: 16,238 (507 MiB)
> > [23:11] <sandos> # Cache hits: 41 / 861 (4%)
> > [23:11] <sandos> # Store hits: 5 / 677 (0%)
> >
> > (Cached hits / cached keys) / (Stored hits / stored keys) = 2.95
> >
> > Thus, in practice, the cache is far more effective than the store.
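
For anyone wanting to reproduce those ratios, here is a quick throwaway
calculator (plain Java; the class and method names are mine, nothing here
is Freenet API):

// Throwaway calculator for the store-vs-cache effectiveness metric
// quoted above: (cache hits / cached keys) / (store hits / stored keys).
public class StoreVsCache {

    static double metric(long cacheHits, long cachedKeys,
                         long storeHits, long storedKeys) {
        double cacheHitsPerKey = (double) cacheHits / cachedKeys;
        double storeHitsPerKey = (double) storeHits / storedKeys;
        return cacheHitsPerKey / storeHitsPerKey;
    }

    public static void main(String[] args) {
        // Figures from the IRC excerpts above.
        System.out.printf("nextgens: %.2f%n", metric(217, 6389, 14, 24550));  // 59.56
        System.out.printf("cyberdo:  %.2f%n", metric(178, 17930, 11, 24895)); // 22.47
        System.out.printf("sandos:   %.2f%n", metric(41, 45148, 5, 16238));   // 2.95
    }
}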
> >
> > The cache caches every key fetched or inserted through this node.
> >
> > The store stores only keys inserted, and of those, only those for which
> > there is no closer node to the key amongst our peers.
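
My reading of that policy, as a compilable sketch (all names here are
mine and purely illustrative, not the real freenet.node classes; key
locations are modelled as doubles on the circular [0,1) keyspace):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StorePolicySketch {
    final double myLocation;
    final List<Double> peerLocations;
    final Map<Double, byte[]> cache = new HashMap<>();
    final Map<Double, byte[]> store = new HashMap<>();

    StorePolicySketch(double myLocation, List<Double> peerLocations) {
        this.myLocation = myLocation;
        this.peerLocations = peerLocations;
    }

    // Called for every key that completes through this node.
    void onKeyCompleted(double keyLocation, byte[] data, boolean wasInsert) {
        // The cache takes everything, whether fetched or inserted.
        cache.put(keyLocation, data);
        // The store takes only inserts, and only if none of our peers
        // is closer to the key's location than we are.
        if (wasInsert && !anyPeerCloser(keyLocation))
            store.put(keyLocation, data);
    }

    boolean anyPeerCloser(double keyLocation) {
        double mine = distance(myLocation, keyLocation);
        for (double peer : peerLocations)
            if (distance(peer, keyLocation) < mine)
                return true;
        return false;
    }

    // Circular distance on [0,1).
    static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    public static void main(String[] args) {
        StorePolicySketch node = new StorePolicySketch(0.40, List.of(0.10, 0.55, 0.90));
        node.onKeyCompleted(0.42, new byte[0], true); // we are closest: cached and stored
        node.onKeyCompleted(0.52, new byte[0], true); // peer at 0.55 is closer: cached only
        System.out.println("cache=" + node.cache.size() + ", store=" + node.store.size());
        // prints: cache=2, store=1
    }
}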
> >
> >
> > The cache being more effective than the store (and note that the above
> > is for CHKs only) implies either:
> > 1. Routing is broken.
> > 2. There is more location churn than the store can cope with.
> > 3. There is more data churn than the store can cope with.
> >
> >
> > 2. SUSPICIONS OF EXCESSIVE LOCATION CHURN
> > -----------------------------------------
> >
> > ljn1981 said that his node would often do a swap and then reverse it.
> > However, several people say their location is more or less what it was.
> > We need a log of each node's location changes over time...
> >
> >
> > 3. PROBE REQUESTS NOT WORKING
> > -----------------------------
> >
> > "Probe requests" are a new class of requests which simply take a
> > location, and try to find the next location - the lowest location
> > greater than the one they started with. Here's a recent trace (these can
> > be triggered by telneting to 2323 and typing PROBEALL:, then watching
> > wrapper.log):
> >
> > LOCATION 1: 0.00917056526893234
> > LOCATION 2: 0.009450590423585203
> > LOCATION 3: 0.009507800765948482
> > LOCATION 4: 0.03378227720218496
> > [ delays ]
> > LOCATION 5: 0.033884263580090224
> > [ delays ]
> > LOCATION 6: 0.03557139211207139
> > LOCATION 7: 0.04136594238104219
> > LOCATION 8: 0.06804731119243879
> > LOCATION 9: 0.06938071503433951
> > LOCATION 10: 0.11468659860500963
> > [ big delays ]
> > LOCATION 11: 0.11498938134581993
> > LOCATION 12: 0.11800179518614218
> > LOCATION 13: 0.1180104005154885
> > LOCATION 14: 0.11907112718505641
> > LOCATION 15: 0.3332896508938398
> > [ biggish delays ]
> > LOCATION 16: 0.6963082287578662
> > LOCATION 17: 0.7003642648424434
> > LOCATION 18: 0.7516363167204175
> > LOCATION 19: 0.7840227104081505
> > LOCATION 20: 0.8238921670991454
> > LOCATION 21: 0.8551853934902863
> > LOCATION 22: 0.8636946791670825
> > LOCATION 23: 0.8755575572906827
> > LOCATION 24: 0.883042607673485
> > LOCATION 25: 0.8910451777595195
> > LOCATION 26: 0.8930966991557874
> > LOCATION 27: 0.8939968594038799
> > LOCATION 28: 0.8940798222254085
> > LOCATION 29: 0.8941104802690825
> > LOCATION 30: 0.9103443172876444
> > LOCATION 31: 0.9103717579924239
> > LOCATION 32: 0.9107237145701387
> > LOCATION 33: 0.9108357699627044
> > LOCATION 34: 0.9130496893125409
> > LOCATION 35: 0.9153056056305631
> > [ delays ]
> > LOCATION 36: 0.9180229911856111
> > LOCATION 37: 0.9184676396364483
> > LOCATION 38: 0.9198162081803294
> > LOCATION 39: 0.9232383399833453
> > [ big delays ]
> > LOCATION 40: 0.9232484869765467
> > LOCATION 41: 0.9398827726484242
> > LOCATION 42: 0.9420672052844097
> > LOCATION 43: 0.9442367949642505
> > LOCATION 44: 0.9521296958111133
> > [ big delays ]
> > LOCATION 45: 0.9521866483104723
> > LOCATION 46: 0.9562645053030697
> > LOCATION 47: 0.9715290823566148
> > LOCATION 48: 0.9722492845296398
> > LOCATION 49: 0.974283274258849
> > [ big delays ... ]
> >
> > Clearly there are more than around 50 nodes on Freenet at any given
> > time, yet the walk above covers the whole keyspace in about 49
> > locations, including some really big jumps as well as some really
> > small ones. This may be a problem with probe requests, but it is
> > suspicious...
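
To put a number on "really big": with around 49 locations spread over the
[0,1) keyspace, uniform spacing would give a mean gap of 1/49, roughly
0.02. The jump from 0.1191 to 0.3333 is about 0.21 (over ten times that),
and the jump from 0.3333 to 0.6963 is about 0.36 (nearly eighteen times).
Either that stretch of keyspace is almost empty, or the probes are
skipping over nodes.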
> >
> >
> >
