On 05/15/2011 09:58 PM, Richard Elling wrote:
>> In one of my systems, I have 1TB mirrors, 70% full, which can be
>> sequentially completely read/written in 2 hrs. But the resilver took 12
>> hours of idle time. Supposing you had a 70% full pool of raidz3, 2TB disks,
>> using 10 disks + 3 parity, and a usage pattern similar to mine, your
>> resilver time would have been minimum 10 days,
> bollix
>> likely approaching 20 or 30
>> days. (Because you wouldn't get 2-3 weeks of consecutive idle time, and the
>> random access time for a raidz approaches 2x the random access time of a
>> mirror.)
> totally untrue
>> BTW, the reason I chose 10+3 disks above was just because it makes
>> calculation easy. It's easy to multiply by 10. I'm not suggesting using
>> that configuration. You may notice that I don't recommend raidz for most
>> situations. I endorse mirrors because they minimize resilver time (and
>> maximize performance in general). Resilver time is a problem for ZFS, which
>> they may fix someday.
> Resilver time is not a significant problem with ZFS. Resilver time is a much
> bigger problem with traditional RAID systems. In any case, it is bad systems
> engineering to optimize a system for best resilver time.
> -- richard
Actually, I have seen resilvers take a very long time (weeks) on
Solaris/raidz2, whereas I almost never see a hardware RAID controller take
more than a day or two. In one case I thrashed the disks absolutely as
hard as I could (hardware controller) and even then only managed to get the
rebuild to take almost a week. Here is an example of one right now:
  pool: raid3060
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 224h54m, 52.38% done, 204h30m to go
config:
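As a sanity check on that progress line, the implied total resilver time can
be back-computed from the elapsed time and percentage (a quick sketch; the
figures are taken straight from the status output above):

```python
# Back-compute total resilver time from the zpool status line:
# "resilver in progress for 224h54m, 52.38% done, 204h30m to go"
elapsed_h = 224 + 54 / 60          # hours elapsed so far
done = 0.5238                      # fraction complete
total_h = elapsed_h / done         # implied total duration
remaining_h = total_h - elapsed_h
print(f"total: {total_h:.1f} h (~{total_h / 24:.1f} days)")
print(f"remaining: {remaining_h:.1f} h")
```

That works out to roughly 429 hours total, i.e. about 18 days for a single
resilver, and the computed remainder agrees with the 204h30m zpool reports.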
A ZFS resilver can take a very long time depending on your usage pattern.
I do disagree with some things he said, though. Like a 1TB drive being
able to be read/written in 2 hours? I seriously doubt this. Just reading
1 TB in 2 hours means an average speed of over 130 MB/sec.
Only really new 1TB drives will even hit that kind of speed at the
beginning of the drive, and the average over the whole disk would be much
closer to 100 MB/sec because of the slower inner tracks. That is also a
best-case scenario. I know 1TB drives (when they first came out) took around
4-5 hours to do a complete read of all data on the disk at full speed.
There is definitely no way to be that fast reading *and* writing 1TB of data
to the drive, unless you count reading from one drive and writing to the
other in parallel. Even then, 3 hours is a much more likely figure, and
that is best case.
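The back-of-envelope arithmetic above can be written out explicitly (a
sketch; the 130 MB/s and 100 MB/s figures are the estimates from this post,
not measurements):

```python
# Time to read 1 TB at a given sustained average throughput.
TB = 1e12  # bytes (decimal TB, as drive vendors count; MB = 1e6 bytes)

def hours_to_read(bytes_total, mb_per_sec):
    """Hours needed to read bytes_total at a sustained average rate."""
    return bytes_total / (mb_per_sec * 1e6) / 3600

# Reading 1 TB in 2 hours implies this average rate:
implied = TB / (2 * 3600) / 1e6
print(f"2-hour read implies {implied:.0f} MB/s average")   # ~139 MB/s

# At a more realistic 100 MB/s whole-disk average:
print(f"at 100 MB/s: {hours_to_read(TB, 100):.1f} h")      # ~2.8 h
```

So a flat 100 MB/s average already puts a bare sequential read near 3 hours,
before any writing is counted at all.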
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss