On Thu, Apr 10, 2008 at 12:13 AM, Michael Rogers <m.rogers at cs.ucl.ac.uk> 
wrote:
> On Apr 9 2008, Daniel Cheng wrote:
>  >Disk are getting cheaper and cheaper...
>  >Also, high data redundancy means we can drop any blocks of them without
>  >problem, right?
>
>  I'm not sure it's that simple - imagine two files with unequal popularity.
>  If we increase the redundancy of both files, causing some blocks to be
>  dropped, what will happen to the reliability of the two files?

I guess both files are spread over a number of nodes, and only a small portion
would overlap (just a guess - I don't know exactly how Freenet works internally).
One popular file alone can't push another file out.

In previous posts, I had proposed to heal *only* a random portion of a
file's blocks. This makes redundancy grow only inverse-exponentially with
popularity, and it's really hard for a single file to reach that kind of
popularity.
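To make the idea concrete, here is a rough Java sketch of what I mean. The
Block/BlockInserter types and the 10% heal fraction are just placeholders for
illustration, not Freenet's real classes or parameters:

import java.util.List;
import java.util.Random;

// Rough sketch only: after a successful FEC decode, re-insert each block we
// had to reconstruct with some small probability, instead of re-inserting
// all of them.
public class RandomPortionHealer {
    // Assumed tuning knob, not a real Freenet parameter.
    private static final double HEAL_FRACTION = 0.1;
    private final Random random = new Random();

    public void healAfterDecode(List<Block> reconstructedBlocks, BlockInserter inserter) {
        for (Block block : reconstructedBlocks) {
            if (random.nextDouble() < HEAL_FRACTION) {
                inserter.insert(block); // heal only this random subset
            }
        }
    }

    // Placeholders standing in for the real block / insert classes.
    public interface Block {}
    public interface BlockInserter { void insert(Block block); }
}

The point is just that each download re-inserts only a small random subset of
the blocks it had to reconstruct, so per-download healing shrinks as a file
becomes healthier instead of growing linearly with download count.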

>  > The only potential problem I have in mind is the LRU drop policy on store
>  > full. All blocks of an unpopular item may be dropped around the same time
>  > if we use this policy.
>
>  Good point.
>
>
>  >I think if the redundancy is high enough, we should use:
>  >   - Random drop of old data on store full.
>  >   - LRU drop on cache full.
>  >which should give a good balance of data retention and load balancing.
>
>  I've recently done some simulations of LRU vs FIFO vs random replacement,
>  but I haven't had time to write up the results yet. The short version is
>  that random replacement performs better than LRU or FIFO for some
>  workloads, and isn't significantly worse for any workload. I didn't
>  simulate multi-block files or FEC, though.
>
>  Cheers,
>  Michael
>
>
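For clarity, here is a rough Java sketch of the drop policy I proposed above
(class and method names are made up for illustration; this is not the actual
datastore code, and "random drop of old data" is read here as picking a random
victim from the older half of the store):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Rough sketch, not Freenet's datastore: random drop of old data when the
// store is full, plain LRU drop when the cache is full.
public class HybridDatastore<K, V> {
    private final int storeCapacity;
    private final int cacheCapacity;
    private final Random random = new Random();

    // Store: insertion-ordered map, so the oldest entries come first.
    private final LinkedHashMap<K, V> store = new LinkedHashMap<>();

    // Cache: access-ordered map, evicted strictly in LRU order.
    private final LinkedHashMap<K, V> cache;

    public HybridDatastore(int storeCapacity, int cacheCapacity) {
        this.storeCapacity = storeCapacity;
        this.cacheCapacity = cacheCapacity;
        this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // LRU drop on cache full.
                return size() > HybridDatastore.this.cacheCapacity;
            }
        };
    }

    public void putStore(K key, V value) {
        if (store.size() >= storeCapacity && !store.containsKey(key)) {
            // Random drop on store full: pick a victim at random from the
            // older half of the store ("drop old data at random").
            ArrayList<K> keys = new ArrayList<>(store.keySet());
            int olderHalf = Math.max(1, keys.size() / 2);
            store.remove(keys.get(random.nextInt(olderHalf)));
        }
        store.put(key, value);
    }

    public void putCache(K key, V value) {
        cache.put(key, value); // eviction handled by removeEldestEntry above
    }
}

Because the store picks a random victim among its older entries rather than
evicting in strict LRU order, all blocks of an unpopular file are unlikely to
be dropped at the same time, which was the problem I mentioned above.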
