On Apr 9 2008, Daniel Cheng wrote:
>Disks are getting cheaper and cheaper...
>Also, high data redundancy means we can drop any of their blocks without
>a problem, right?

I'm not sure it's that simple - imagine two files with unequal popularity. 
If we increase the redundancy of both files, causing some blocks to be 
dropped, what happens to the reliability of each file?

> The only potential problem I have in mind is the LRU drop policy on store 
> full. All blocks of an unpopular item may be dropped around the same time if 
> we use this policy.

Good point.

>I think if the redundancy is high enough, we should use:
>   - Randomly drop old data when the store is full.
>   - LRU drop when the cache is full.
>which should give a good balance of data retention and load balancing.
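
Just so we're talking about the same thing, here's how I read that split 
(only a sketch; the class, names, and sizes below are mine, not anything 
that exists in the code):

    import random
    from collections import OrderedDict

    class HybridStore:
        """Long-term store uses random replacement; cache uses LRU."""

        def __init__(self, store_size, cache_size):
            self.store = {}              # long-term store, random replacement
            self.cache = OrderedDict()   # cache, LRU via access order
            self.store_size = store_size
            self.cache_size = cache_size

        def put_store(self, key, block):
            if key not in self.store and len(self.store) >= self.store_size:
                # "Randomly drop old data on store full" -- here the victim is
                # picked uniformly; restricting it to old data would need an
                # age threshold on top of this.
                victim = random.choice(list(self.store))
                del self.store[victim]
            self.store[key] = block

        def put_cache(self, key, block):
            if key in self.cache:
                self.cache.move_to_end(key)
            elif len(self.cache) >= self.cache_size:
                self.cache.popitem(last=False)   # LRU drop on cache full
            self.cache[key] = block

        def get(self, key):
            if key in self.cache:
                self.cache.move_to_end(key)      # a hit refreshes the LRU order
                return self.cache[key]
            return self.store.get(key)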

I've recently done some simulations of LRU vs FIFO vs random replacement, 
but I haven't had time to write up the results yet. The short version is 
that random replacement performs better than LRU or FIFO for some 
workloads, and isn't significantly worse for any workload. I didn't 
simulate multi-block files or FEC, though.
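
For anyone who wants to poke at this before I get the write-up done, the core 
of such a simulation fits in a few lines. This is a toy reconstruction, not my 
actual code, and the Zipf workload, key space, and cache size are arbitrary:

    import random
    from collections import OrderedDict, deque

    def zipf_workload(n_keys, n_requests, s=1.0):
        # Requests drawn from a Zipf-like popularity distribution.
        weights = [1.0 / rank ** s for rank in range(1, n_keys + 1)]
        return random.choices(range(n_keys), weights=weights, k=n_requests)

    def hit_rate(policy, requests, capacity):
        lru = OrderedDict()            # used only by the LRU policy
        cache, order = set(), deque()  # used by FIFO and random replacement
        hits = 0
        for key in requests:
            if policy == "lru":
                if key in lru:
                    hits += 1
                    lru.move_to_end(key)
                else:
                    if len(lru) >= capacity:
                        lru.popitem(last=False)
                    lru[key] = True
            else:
                if key in cache:
                    hits += 1
                else:
                    if len(cache) >= capacity:
                        if policy == "fifo":
                            victim = order.popleft()
                        else:                      # random replacement
                            victim = random.choice(tuple(cache))
                            order.remove(victim)
                        cache.discard(victim)
                    cache.add(key)
                    order.append(key)
        return hits / len(requests)

    requests = zipf_workload(n_keys=10_000, n_requests=100_000)
    for policy in ("lru", "fifo", "random"):
        print(policy, round(hit_rate(policy, requests, capacity=1_000), 3))

It only models single-block requests, so it says nothing about the multi-block 
or FEC case above.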

Cheers,
Michael
