> From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of
> Mattias Pantzare
> 
> It is about 1 vdev with 12 disks or 2 vdevs with 6 disks.  If you have 2
> vdevs you have to read half the data compared to 1 vdev to resilver a
> disk.

Let's suppose you have 1T of data.  You have a 12-disk raidz2.  So you have
approx 100G on each disk (data plus parity), and you replace one disk.  Then
the 11 surviving disks will each read 100G, and the new disk will write 100G.

Let's suppose you have 1T of data.  You have 2 vdevs that are each a 6-disk
raidz1.  Then we'll estimate 500G is on each vdev, so each disk again has
approx 100G.  You replace a disk.  Then 5 disks will each read 100G, and 1
disk will write 100G.

Both of the above situations resilver in equal time, unless there is a bus
bottleneck: every disk handles roughly the same 100G, and the disks work in
parallel, so the limit is per-disk throughput, not vdev width.  21 disks in a
single raidz3 will resilver just as fast as 7 disks in a raidz1, as long as
you are avoiding the bus bottleneck.  But 21 disks in a single raidz3
provides better redundancy than 3 vdevs each containing a 7-disk raidz1,
because it survives any 3 disk failures rather than 1 failure per vdev.
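Plugging the wider example into the same sketch (1800G of data is chosen
only to keep the per-disk numbers round):

# Same function as above: 21-disk raidz3 vs 3 x 7-disk raidz1.
print(resilver_io_per_disk(1800, vdevs=1, disks_per_vdev=21, parity=3))
# -> (100.0, 100.0)
print(resilver_io_per_disk(1800, vdevs=3, disks_per_vdev=7, parity=1))
# -> (100.0, 100.0)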

In my personal experience, approx 5 disks can max out approx 1 bus.  (It
actually ranges from 2 to 7 disks if you have an imbalance, like cheap disks
on a good bus or good disks on a crap bus, but generally speaking people
don't do that.  People usually pair good disks with a good bus and cheap
disks with a cheap bus, so approx 5 disks max out approx 1 bus.)
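For a rough feel of where "approx 5" comes from, here is the same kind of
back-of-envelope; the throughput figures are assumptions for illustration,
not measurements, so substitute numbers for your own disks and bus:

# How many disks streaming sequentially will fill one bus.
def disks_to_saturate_bus(bus_mb_s, disk_mb_s):
    return bus_mb_s / disk_mb_s

# e.g. ~150 MB/s sequential per disk, ~750 MB/s usable bus bandwidth.
print(disks_to_saturate_bus(bus_mb_s=750, disk_mb_s=150))  # -> 5.0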

In my personal experience, servers are generally built with a separate bus
for approx every 5-7 disk slots.  So what it really comes down to is ...

Instead of the Best Practices Guide saying "Don't put more than ___ disks
into a single vdev," the BPG should say "Avoid the bus bandwidth bottleneck
by constructing your vdevs from physical disks that are distributed across
multiple buses, as needed for the speed of your disks and buses."
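As a rough sketch of what that layout rule means in practice (the two-bus
topology and the cXtYd0 disk names below are hypothetical), each vdev can be
built by interleaving disks taken from the different buses:

# Round-robin disks from each bus into vdevs, so no vdev leans on one bus.
def spread_vdevs(buses, vdev_width):
    # Interleave: first disk from each bus, then second from each bus, ...
    # (assumes an equal number of disks per bus; zip stops at the shortest).
    interleaved = [disk for group in zip(*buses) for disk in group]
    return [interleaved[i:i + vdev_width]
            for i in range(0, len(interleaved), vdev_width)]

buses = [[f"c{b}t{t}d0" for t in range(6)] for b in range(2)]  # 2 buses x 6 disks
for vdev in spread_vdevs(buses, vdev_width=6):
    print("raidz1", *vdev)
# Each 6-disk vdev ends up with 3 disks on each bus.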
