On 11/08/2011 01:15, Rajiv Chittajallu wrote:
Ted Dunning wrote on 08/10/11 at 10:40:30 -0700:
To be specific, taking a 100 node x 10 disk x 2 TB configuration with drive
MTBF of 1000 days, we should be seeing drive failures on average once per
day. With 1G Ethernet and 30 MB/s/node dedicated to re-replication, it will
take just over 10 minutes to restore replication of a single drive and just
over 100 minutes to restore replication of an entire machine.
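For reference, a quick back-of-envelope check of those figures (a sketch only; decimal TB and a flat 100-node aggregate rate are assumed):

# Assumptions from the post: 100 nodes, 10 disks/node, 2 TB/disk,
# drive MTBF of 1000 days, 30 MB/s/node dedicated to re-replication.
NODES = 100
DISKS_PER_NODE = 10
DISK_TB = 2
MTBF_DAYS = 1000
REREPL_MB_S_PER_NODE = 30

total_drives = NODES * DISKS_PER_NODE
failures_per_day = total_drives / MTBF_DAYS            # ~1 drive failure per day

aggregate_mb_s = NODES * REREPL_MB_S_PER_NODE          # cluster-wide re-replication rate
drive_mb = DISK_TB * 1_000_000                         # one drive's data, in MB
node_mb = drive_mb * DISKS_PER_NODE                    # one machine's data

print(f"drive failures/day:    {failures_per_day:.1f}")
print(f"single-drive recovery: {drive_mb / aggregate_mb_s / 60:.0f} min")
print(f"whole-node recovery:   {node_mb / aggregate_mb_s / 60:.0f} min")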
You are assuming that only one good node is used to restore replication for
all the blocks on the failed drive, which is very unlikely. With a
replication factor of 3, you will have at least 2 nodes to choose from
in the worst case, and many more in a typical cluster.
And with more spindles (6+), one would probably consider using the
second GigE port, which is standard on most of the commodity gear out
there.
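Rough numbers for the fan-out point (assuming a 128 MB block size, and that the second GigE port simply doubles the per-node bandwidth reserved for re-replication; both assumptions are mine):

# With replication factor 3, every block on the failed 2 TB drive still has
# live replicas spread across the surviving nodes, so no single source has
# to push the whole 2 TB.
NODES = 100
DISK_TB = 2
BLOCK_MB = 128                                    # assumed HDFS block size

blocks_lost = DISK_TB * 1_000_000 // BLOCK_MB     # ~15,600 blocks on the dead drive
sources = NODES - 1                               # surviving nodes holding the other replicas
mb_per_source = blocks_lost * BLOCK_MB / sources  # ~20 GB pushed by each source

for ports, rate in (("1 GigE port ", 30), ("2 GigE ports", 60)):  # MB/s per node, assumed
    print(f"{ports}: each source pushes ~{mb_per_source / 1000:.0f} GB "
          f"in ~{mb_per_source / rate / 60:.0f} min at {rate} MB/s")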
I'd be willing to collaborate on writing a paper on these issues if
someone has data they can share. We could look at observed vs. predicted
failure rates of 12x2TB HDD systems, discuss mitigation actions (moving
blocks to any free local space would be better than going over the LAN,
for example), and compare 2x1GbE with 1x10GbE for recovery (ignoring
switch cost).
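To help frame that comparison, here is a rough sketch of whole-node recovery time for a 12x2TB box under the different NIC options; the 100-node cluster size, the usable payload rates, and the fraction of link capacity given to re-replication are all assumptions of mine, and the disks are taken never to be the bottleneck:

NODES = 100
DISKS_PER_NODE = 12
DISK_TB = 2
RECOVERY_FRACTION = 0.25                 # share of link capacity spent on recovery (assumed)

node_mb = DISKS_PER_NODE * DISK_TB * 1_000_000

nic_mb_s = {                             # rough usable payload rate per node, MB/s
    "1x1GbE ": 117,
    "2x1GbE ": 234,
    "1x10GbE": 1170,
}

for nic, rate in nic_mb_s.items():
    aggregate = (NODES - 1) * rate * RECOVERY_FRACTION
    print(f"{nic}: ~{node_mb / aggregate / 60:.0f} min to restore a failed 12x2TB node")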