thelema wrote:

> I'm just surprised that everyone now seems to agree that we need
> redundancy when before everyone seemed to be saying "hell no, keep that
> redundancy away", and I had to compromise with a system that allowed
> both redundant and non-redundant usage.


Not everyone agrees on the splitting plans with redundancy yet. I don't, 
for example, but I gave up resisting since I seem to be in a tiny 
minority. I am against splitting files too eagerly in the first place 
(there were discussions where a 64 kB/128 kB split size was proposed, 
which I consider ridiculous).

a) splitting increases the likelihood of a retrieval failure (more 
opportunities for one or more pieces to go missing):
    p_tot = p_retr^splitparts
    (file = 1.01 MB, splitsize = 256 kB, splitparts = 5, without redundancy)
     p_retr = 99% -> p_tot = 95%
     p_retr = 90% -> p_tot = 59%
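The numbers above are easy to check: with independent retrievals, the whole
file only comes back if every part does. A quick sketch (my own illustration,
not from the original post; the function name is made up):

```python
import math

def total_retrieval_prob(file_kb, split_kb, p_retr):
    """P(retrieving the whole file) when each of the ceil(file/split)
    parts is fetched independently with probability p_retr."""
    parts = math.ceil(file_kb / split_kb)
    return p_retr ** parts

# 1.01 MB (~1034 kB) file, 256 kB split size -> 5 parts
print(round(total_retrieval_prob(1034, 256, 0.99), 2))  # -> 0.95
print(round(total_retrieval_prob(1034, 256, 0.90), 2))  # -> 0.59
```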

b) splitting might require redundancy in the split files to compensate 
for a) -> more data in Freenet -> more data drops out -> decreased 
reliability and storage capacity
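To make the trade-off in b) concrete, here is a sketch assuming a
hypothetical k-of-n redundancy scheme (any k of n stored pieces suffice to
reconstruct the file; the post does not specify the actual scheme). The
retrieval probability follows the binomial tail:

```python
from math import comb

def prob_reconstruct(n, k, p_retr):
    """P(at least k of the n stored pieces are retrievable),
    each piece fetched independently with probability p_retr."""
    return sum(comb(n, i) * p_retr**i * (1 - p_retr)**(n - i)
               for i in range(k, n + 1))

# 5 data pieces expanded to 8 stored pieces: 60% more data in the
# network, but at p_retr = 0.90 the file's retrieval probability
# climbs from 59% (all-5-of-5) to roughly 99.5% (any-5-of-8).
print(round(prob_reconstruct(8, 5, 0.90), 3))  # -> 0.995
```

The reliability gain is real, but it is bought with exactly the extra data
volume that point b) warns about.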

c) splitfiles lead to more overhead, as the last part has to be padded 
with filler to reach the standard size -> more data in Freenet... see b)
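The padding cost in c) is worst for small files. A quick illustration (my
own arithmetic, assuming the last part is padded to the full split size):

```python
import math

def padding_overhead(file_kb, split_kb):
    """Return (number of parts, kB of filler) when the last
    part is padded up to the standard split size."""
    parts = math.ceil(file_kb / split_kb)
    return parts, parts * split_kb - file_kb

# a 386 kB file at a 256 kB split size: 2 parts, 126 kB of
# filler -- roughly a third of the file's size again
print(padding_overhead(386, 256))  # -> (2, 126)
```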

I see a vicious circle coming up there, which I don't really like.

Mind, I am not against splitting per se; it may be appropriate for 
*big* files, but why should my 386 kB .gif be split into two parts?

Sebastian


_______________________________________________
Devl mailing list
Devl at freenetproject.org
http://lists.freenetproject.org/mailman/listinfo/devl
