Re: [freenet-support] NPE in download/upload queue database dump
On Wed, Dec 8, 2010 at 1:40 PM, Matthew Toseland wrote:
> On Wednesday 08 December 2010 15:12:24 Juiceman wrote:
> > I came across a fatal node startup error with 1309 or 1310_pre2 where I
> > deleted persistent temp but not node.db4o. I had to delete node.db4o to
> > start up.
>
> Please post the log.

You may have already resolved this...

INFO | jvm 1| 2010/12/07 21:59:39 | Deleted 0 of 0 temporary files (0 non-temp files in temp directory) in 0s
INFO | jvm 1| 2010/12/07 21:59:39 | Old: F:\Freenet\persistent-temp-10789 prefix freenet-temp- from F:\Freenet\persistent-temp-10789 old path F:\Freenet\persistent-temp-10789 old parent F:\Freenet
INFO | jvm 1| 2010/12/07 21:59:39 | New: F:\Freenet\persistent-temp-10789 prefix freenet-temp- from F:\Freenet\persistent-temp-10789
INFO | jvm 1| 2010/12/07 21:59:39 | Creating free blocks cache...
INFO | jvm 1| 2010/12/07 21:59:39 | Creating free blocks cache: 0 / 0
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error: Error in WrapperListener.start callback. java.lang.ArrayIndexOutOfBoundsException: 2005
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error: java.lang.ArrayIndexOutOfBoundsException: 2005
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error:   at freenet.support.BitArray.setBit(BitArray.java:69)
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error:   at freenet.support.io.PersistentBlobTempBucketFactory.createFreeBlocksCache(PersistentBlobTempBucketFactory.java:192)
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error:   at freenet.support.io.PersistentBlobTempBucketFactory.onInit(PersistentBlobTempBucketFactory.java:130)
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error:   at freenet.support.io.PersistentTempBucketFactory.load(PersistentTempBucketFactory.java:273)
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error:   at freenet.node.NodeClientCore.initPTBF(NodeClientCore.java:607)
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error:   at freenet.node.NodeClientCore.<init>(NodeClientCore.java:289)
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error:   at freenet.node.Node.<init>(Node.java:1744)
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error:   at freenet.node.NodeStarter.start(NodeStarter.java:170)
INFO | jvm 1| 2010/12/07 21:59:39 | WrapperManager Error:   at org.tanukisoftware.wrapper.WrapperManager$11.run(WrapperManager.java:2979)
INFO | jvm 1| 2010/12/07 21:59:40 | Shutting down...
INFO | jvm 1| 2010/12/07 21:59:40 | Stopping database jobs...
INFO | jvm 1| 2010/12/07 21:59:40 | Rolling back unfinished transactions...
INFO | jvm 1| 2010/12/07 21:59:40 | Closing database...
INFO | jvm 1| 2010/12/07 21:59:40 | [db4o 7.4.63.11890 2010-12-07 21:59:40]
INFO | jvm 1| 2010/12/07 21:59:40 | 'F:\Freenet\node.db4o.crypt' close request
INFO | jvm 1| 2010/12/07 21:59:40 | [db4o 7.4.63.11890 2010-12-07 21:59:40]
INFO | jvm 1| 2010/12/07 21:59:40 | 'F:\Freenet\node.db4o.crypt' closed
STATUS | wrapper | 2010/12/07 21:59:41 | <-- Wrapper Stopped

> > On Dec 5, 2010 12:41 AM, "Roland Haeder" wrote:
> > > Hello support,
> > >
> > > I have added around 8,700 files, and after 3,000 are downloaded I'm stuck
> > > with two different NPEs and an ArrayIndexOutOfBoundsException.
> > >
> > > Here is the first one:
> > > http://www.mxchange.org/downloads/freenet/npe1.txt
> > >
> > > It happens during the startup phase.
> > >
> > > The second one happens when I try to download a freenet site, e.g.:
> > > http://127.0.0.1:/u...@85gztciqo9iepdagvjkto9d-zms1liabr6jb85m4ens,VGDItiCVzCcWAay51faZzcIfAepzeHpzXYvChlueWYE,AQACAAE/stats/1126/
> > >
> > > Here is the NPE:
> > > http://www.mxchange.org/downloads/freenet/npe2.txt
> > >
> > > The last exception happens when I have entered my master password
> > > (security levels are: normal high high):
> > > http://www.mxchange.org/downloads/freenet/array-index-out-of-bounds.txt
> > >
> > > I have already tried deleting the persistent-temp directory and also
> > > datastore/, but I would really welcome it if you could fix these rather
> > > than my having to delete my (still) large download queue...
> > >
> > > I have the latest build, 1308.
> > >
> > > Best regards,
> > > Roland
>
> ___
> Support mailing list
> Support@freenetproject.org
> http://news.gmane.org/gmane.network.freenet.support
> Unsubscribe at http://emu.freenetproject.org/cgi-bin/mailman/listinfo/support
> Or mailto:support-requ...@freenetproject.org?subject=unsubscribe

--
I may disagree with what you have to say, but I shall defend, to the death, your right to say it. - Voltaire
Those who would give up Liberty, to purchase temporary Safety, deserve neither Liberty nor Safety. - Ben Franklin
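For anyone hitting the same crash: the trace above shows the free-blocks cache being rebuilt from node.db4o after the persistent-temp files were deleted, so the database still references slot indices (here 2005) beyond the end of the freshly recreated, empty blob file. Below is a minimal illustrative sketch of that failure mode with a defensive guard. All names here are hypothetical; this is not Freenet's actual code, which uses its own fixed-size BitArray rather than java.util.BitSet.

```java
import java.util.BitSet;

public class FreeBlocksCacheSketch {
    /**
     * Rebuilds an "occupied blocks" bitmap for a blob file of the given
     * length (in blocks) from slot indices still recorded in the database.
     * With a fixed-size bit array sized from an empty blob file, a stale
     * database slot such as 2005 indexes past the end -- the
     * ArrayIndexOutOfBoundsException seen in the log. Here we skip and
     * report stale slots instead of throwing.
     */
    public static BitSet rebuildCache(int blobLengthInBlocks, int[] slotsInDatabase) {
        BitSet occupied = new BitSet(Math.max(blobLengthInBlocks, 1));
        for (int slot : slotsInDatabase) {
            if (slot >= blobLengthInBlocks) {
                // A fixed-size bit array would throw here.
                System.err.println("Stale database slot " + slot
                        + " beyond blob file end (" + blobLengthInBlocks + " blocks)");
                continue;
            }
            occupied.set(slot);
        }
        return occupied;
    }
}
```

This matches the workaround in the thread: deleting node.db4o removes the stale slot records, at the cost of losing the queue.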
[freenet-support] Load management: Some interesting figures and next steps
The by-HTL stats include the average distance between the key and the ideal location at each HTL. Yesterday on my node, the closest point for SSKs was 0.0003 at HTL 13, and for CHKs it was 0.0041 at HTL 14. Note the extra zero! So we are nearly 12 times better at routing SSKs than CHKs. At another time (today), the best for SSKs is 0.0006 at HTL 15 and 13, but for CHKs it is 0.0063 at HTL 15.

So the good news is that we converge pretty quickly; the bad news is that for CHKs we seem unable to get close to the target. IMHO this is because I turned off some of the fairness-between-types logic: it was not possible to make it work with the new load management changes. There is a new version that will be deployed with the final new load management. But it also shows what is possible, and IMHO it confirms my theory that misrouting as a result of mishandling of load is the main reason why performance is relatively poor. I doubt that reinstating the old fairness-between-types logic would do anything more than converge both on 0.003 or thereabouts. What we need is the new load management logic.

NEXT STEPS:

* 1310 includes the first part of the bulk flag. This is necessary for new load management because new load management involves queueing, i.e. it is relatively high latency. By marking low-latency requests explicitly, we can optimise them for latency, while using more of the available capacity for requests marked bulk, which only care about throughput (most requests, IMHO). Realtime requests have shorter timeouts, have priority for transfers (but with a scheme to prevent starvation), are accepted by load limiting in much smaller numbers, and are assumed to be very bursty.

* Next week I will introduce code to actually set the bulk flag. For now it is always set to bulk; next week we will set it to realtime for fproxy requests, and allow FCP apps to set it if they want.

* We will also eliminate turtles, and significantly increase the number of bulk requests allowed.
* Requests which are rejected due to short-term load management are marked as "soft" rejects. A very small change to make the node not remember such requests, so that other nodes can try again and maybe get accepted, should improve routing slightly. It is also a prerequisite for new load management, which uses a limited number of retries in some cases (because while we know how many requests we are responsible for, we don't know how many requests other nodes are running on our peer).

* Two-stage timeout is important for new load management: if a downstream node times out, we need our peer to tell us that it has recognised the timeout. This lets us know exactly when it is no longer running the request, and thus keep an accurate count of how many of our requests are running on our peer, which is vital for new load management.

* We will start sending the messages which indicate exactly how many requests are running and how many can be accepted from the peer. No more than one of these (for each of bulk and realtime) will be included in any given packet, and it will be up to date at the time of sending.

* Then we can actually use them! The core of new load management is queueing so that we can get routed to a node reasonably close to our ideal. If a node is way below the median capacity, or if it is severely backed off, we won't wait for it; otherwise we try to wait for our first choice. A lot of work has already been done towards this; for instance, the recent changes making requests essentially threadless are a prerequisite for the bulk flag greatly increasing the number of requests running.
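For readers unfamiliar with the "distance" figures quoted above: locations and keys live on a circular keyspace in [0.0, 1.0), and distance is measured the shorter way around the circle. A minimal sketch of that metric follows (class and method names are mine, not the node's actual implementation); it shows why a closest point of 0.0003 is an order of magnitude better routed than 0.0041.

```java
public class KeyspaceDistance {
    /**
     * Distance between two locations on the circular [0.0, 1.0) keyspace:
     * the shorter way around the circle, so the result is at most 0.5.
     */
    public static double distance(double a, double b) {
        double direct = Math.abs(a - b);
        return Math.min(direct, 1.0 - direct);
    }

    public static void main(String[] args) {
        // 0.98 and 0.02 are close neighbours because the keyspace wraps.
        System.out.println(distance(0.98, 0.02)); // ~0.04
        // Opposite sides of the circle give the maximum distance.
        System.out.println(distance(0.25, 0.75)); // 0.5
    }
}
```

Routing greedily minimises this distance hop by hop, which is why the average distance at each HTL is a direct measure of routing quality.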
[freenet-support] Freenet 0.7.5 build 1310
Build 1310 is now available. Please upgrade; it will be mandatory on Monday. Changes include:

- A big part of the bulk (throughput) vs realtime (latency) flag. This is the next big chunk of new load management to be merged. We don't actually use it yet; I will release a build next week which sends realtime-flagged requests for fproxy. Currently everything is marked as bulk. There are a few other pieces left before we can sort out load management properly, and they will be merged shortly. This part should *hopefully* have no immediate effect.
- Small refactoring of the crypto code, plus documentation, to make it clearer what is going on. Slight improvement to the security of encrypted temp buckets (applies only to new tempfiles).
- Fix a few small issues on the web interface.
- Handle XHTML (especially with charsets) better.
- Slight improvements to the PNG filter.
- Small fixes for blob temp files, probably not serious.

Thanks!