Peter Wood wrote:
I'm using OpenIndiana 151a7, zpool v28, zfs v5.

When I bought my storage servers I intentionally left HDD slots available so that I can add another vdev when needed and defer the expense.

After reading some posts on the mailing list I'm getting concerned about degraded performance due to unequal distribution of data among the vdevs. I still have a chance to migrate the data away, add all the drives, rebuild the pool, and start fresh.

Before going down that road I was hoping to hear your opinion on the best way to handle this.

System: Supermicro chassis with 36 HDD bays. 28 bays are filled with 3TB 7.2K RPM enterprise SAS drives; 8 bays remain available to add another vdev to the pool.

Pool configuration:

Will adding another vdev hurt the performance?

How full is the pool?
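One way to answer that, and to see how the existing data is spread across the vdevs, is with the per-vdev views of zpool list and zpool iostat (a sketch; "tank" is a placeholder pool name):

```shell
# Overall capacity plus per-vdev ALLOC/FREE breakdown
zpool list -v tank

# Per-vdev space and I/O statistics, useful for spotting
# vdevs that are much fuller (or busier) than the others
zpool iostat -v tank
```

If the existing vdevs are well below ~80% full, new writes will still be striped reasonably across all vdevs after the addition, and the imbalance tends to even out over time.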

When I've added (or grown) a vdev, I used zfs send to make a
copy of a suitably large filesystem within the same pool, then deleted
the original and renamed the copy. I had to do this a couple of times
to redistribute data, but it saved a lot of downtime.
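That copy-and-rename workflow might look roughly like this (a sketch; "tank/data" and the snapshot name are placeholders, and the destroy step is only safe once you've verified the copy):

```shell
# Snapshot the filesystem whose data you want rewritten
zfs snapshot tank/data@migrate

# Copy it within the same pool; the rewritten blocks are
# allocated across all vdevs, including the newly added one
zfs send tank/data@migrate | zfs receive tank/data-new

# Remove the original and rename the copy into place
zfs destroy -r tank/data
zfs rename tank/data-new tank/data

# Drop the leftover snapshot (it followed the rename)
zfs destroy tank/data@migrate
```

Repeating this for the largest filesystems moves most of the pool's data onto the new allocation pattern without exporting anything off the machine.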


zfs-discuss mailing list
