On Tuesday 09 March 2010 13:20:08 Daniel Stork wrote:
Hi, I'm having problems with files stuck at 100% and having to wait hours and
days for them to decode.
It seems that despite having all the parts, Freenet just sits on it. Is there
a reason for this?
I have created this bug on the bug tracker for your problem. *Please* could you
register a user account and Monitor Issue on this bug:
Yes. Most likely it is not in fact stuck at 100% of the file but at 100% of the
top layer of the file. Big files get divided into multiple layers, with each
layer telling us how to fetch the next. The last layer contains the actual
data, and the last-but-one layer contains all the CHKs for the actual data.

For a big file, the last layer is HUGE, and so are the data structures
corresponding to it. Because we use a database (which allows us to fetch big
files on small amounts of memory), and because our data structures are
relatively inefficient, processing the last-but-one layer for a big file can
take a *long* time involving a vast amount of disk access. If this is the
problem, the best way to solve it is to change the data structures; we will do
so when we make the next set of major changes (probably combined with LDPC
codes and using the same decryption key for all blocks of a splitfile).

Another possible answer is that it is decompressing, but I wouldn't expect
disruptive levels of disk I/O if that were the problem.
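To make the layering concrete, here is a minimal sketch (not Freenet's actual data structures; the block size and keys-per-block figures are illustrative assumptions) of how the metadata layers for a file stack up, with each layer holding the keys for the layer below it:

```python
# Hypothetical sketch of layered splitfile metadata, NOT Freenet's real
# on-disk format: each layer is a set of blocks whose contents are the
# keys of the layer below.

BLOCK_SIZE = 32 * 1024            # assume one block holds ~32 KiB of data
KEYS_PER_BLOCK = BLOCK_SIZE // 32  # assume a binary key is 32 bytes

def layer_sizes(file_size):
    """Return the number of blocks in each layer, top layer first."""
    layers = []
    blocks = -(-file_size // BLOCK_SIZE)   # ceil: data blocks (bottom layer)
    layers.append(blocks)
    # Each metadata layer above stores the keys of the layer below it.
    while blocks > 1:
        blocks = -(-blocks // KEYS_PER_BLOCK)
        layers.append(blocks)
    return list(reversed(layers))

# For a 4 GiB file the last-but-one layer (all the CHKs for the data)
# is by far the largest piece of metadata the node must process.
print(layer_sizes(4 * 1024**3))  # → [1, 128, 131072]
```

The point of the sketch: the bottom layer is the data itself, and the layer just above it grows linearly with file size, which is why decoding a big file means churning through a huge amount of metadata.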
One way to find out is to get a thread dump from the Statistics page while the
node is busy doing these things. The dump is written to wrapper.log, so please
take one while the node is hammering away at 100%, and then send it to us.
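If wrapper.log has grown large, something like the following can pull out just the most recent dump before mailing it. This is a sketch under assumptions: JVM thread dumps start with the standard "Full thread dump" banner, and the sample log here is fabricated for illustration.

```shell
# Fabricated sample log, standing in for a real wrapper.log:
cat > sample-wrapper.log <<'EOF'
INFO   | jvm 1 | routine output
INFO   | jvm 1 | Full thread dump Java HotSpot(TM) Server VM:
INFO   | jvm 1 | "main" prio=5 RUNNABLE
EOF

# Print everything from the last "Full thread dump" banner to the end of
# the file (the most recent dump).
awk '/Full thread dump/{n=NR} {l[NR]=$0} END{if(n) for(i=n;i<=NR;i++) print l[i]}' sample-wrapper.log
```

Replace `sample-wrapper.log` with the path to your real wrapper.log.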
Would it be possible for you guys to fix this? Because it would be great not to
have to wait days to decode a file that's already here.
As a great many national governments have discovered, anything involving
databases is hard. :|
It will get fixed, but not immediately. However, that thread dump would be very
helpful.
The node.db4o file also becomes enormous for some reason (over half a GB)
after a few days.
You can defrag it: there are two config options related to this, one something
like "defrag on restart" and the other controlling whether to reset the first
option after defragging. Then restart the node. This should dramatically reduce
the size of node.db4o, for a short while. But make sure it doesn't run out of
disk space during the defrag.
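In freenet.ini that would look roughly like the fragment below. The option names here are assumptions from memory and may differ in your version; check the two defrag-related settings on the advanced config page for the exact keys.

```ini
# Sketch only - option names are illustrative, verify against your freenet.ini.
node.defragDatabaseOnStartup=true
# Whether to flip the option above back to false after one defrag run:
node.defragOnceOnly=true
```

After editing, restart the node and the defrag runs on startup.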
It seems that this is what causes the files to take so long to decode, and it
is also making a high-end computer totally unresponsive.
Disks suck. Especially domestic-grade disks, but all disks suck. And
unfortunately handling large amounts of data with limited RAM involves disks
(most cheap SSDs suck too, although there are some very good recent ones). But
I agree it could be *vastly* more efficient. It's simply a question of
priority: for how many users is this a serious problem, more serious than, say,
the file never completing in the first place?
In the meantime, until this hopefully gets fixed, could you tell me if
there's a way to change the location of the persistent temp folder?
I would like to put node.db4o on a RAM disk, but the persistent temp is
too large for this, so I would have to separate the two. I could not find
a setting for this in freenet.ini.
There is a config option for moving the persistent temp dir in advanced mode.
I'm not sure if it's settable on the fly; you can however shut the node down,
change it in freenet.ini, and restart.
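For reference, the edit would look something like this; the option name and path are assumptions, so confirm the exact key on the advanced config page:

```ini
# Sketch only - verify the key name in your freenet.ini.
# Leaves node.db4o where it is while moving the (large) persistent temp
# data to a bigger disk:
node.persistentTempDir=/mnt/bigdisk/freenet-persistent-temp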
Thanks a lot, and thanks for all your work on Freenet, it's awesome.
Support mailing list
Unsubscribe at http://emu.freenetproject.org/cgi-bin/mailman/listinfo/support