Jim Klimov wrote:
On 2013-02-12 10:32, Ian Collins wrote:
Ram Chander wrote:
Hi Roy,
You are right, so it looks like a re-distribution issue. Initially there
were two vdevs with 24 disks (disks 0-23) for close to a year, after
which we added 24 more disks and created additional vdevs. The initial
vdevs are filled up, so write speed has declined. Now, how do I find the
files that are present on a given vdev or disk? That way I can remove
them and copy them back to redistribute the data. Is there any other way
to solve this?

The only way is to avoid the problem in the first place by not mixing
vdev sizes in a pool.


I was a bit quick off the mark there; I didn't notice that some vdevs were older than others.
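
As for finding which files sit on a given vdev: one slow but workable
approach is to go file by file and ask zdb where each one's blocks live -
the file's inode number is its ZFS object number, and zdb's block-pointer
dump prints DVAs as DVA[n]=<vdev:offset:size>, where the first field is
the top-level vdev id. A rough sketch of that idea (in Python; the dataset
and path names are placeholders, and the parsing just assumes that DVA
text format):

import os
import re
import subprocess

def vdevs_holding(dataset, path):
    # The inode number reported by stat() is the file's ZFS object number.
    obj = str(os.stat(path).st_ino)
    # zdb -ddddd <dataset> <object> dumps the object's block pointers.
    dump = subprocess.check_output(["zdb", "-ddddd", dataset, obj])
    text = dump.decode("utf-8", "replace")
    # Collect the vdev ids out of the DVA[n]=<vdev:offset:size> fields.
    return sorted(set(int(m.group(1))
                      for m in re.finditer(r"DVA\[\d+\]=<(\d+):", text)))

# e.g. print(vdevs_holding("tank/data", "/tank/data/somefile"))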

Well, that imbalance is there - in the zpool status printout we see
raidz1 top-level vdevs of 5, 5, 12, 7, 7 and 7 disks and some 5 spares
- which does seem to sum up to 48 ;)

The vdev sizes (including parity space) are about 14, 14, 22, 19, 19 and 19TB respectively, 127TB in total. So even if the data is balanced, the performance of this pool will still start to degrade once ~84TB (about 2/3 full) is used.
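
To spell out where that ~84TB figure comes from: if the data were spread
evenly, each vdev would carry the same amount, so the two 14TB vdevs fill
first - at roughly 6 x 14 = 84TB of pool usage - and from then on all new
writes crowd onto the four bigger vdevs. A quick back-of-the-envelope
check (Python, using the rounded sizes above):

vdev_tb = [14, 14, 22, 19, 19, 19]     # per-vdev sizes quoted above, incl. parity
pool_tb = 127                          # total quoted above
knee_tb = min(vdev_tb) * len(vdev_tb)  # even spread: smallest vdevs full at 14TB each
print(knee_tb, round(knee_tb / pool_tb, 2))   # 84 0.66 -> about 2/3 full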

So the only viable long-term solution is a rebuild, or putting bigger drives in the two smallest vdevs.

In the short term, when I've had similar issues I've used zfs send to copy a large filesystem within the pool, then renamed the copy to the original name and deleted the original. This can be repeated until you have an acceptable distribution.
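
For what it's worth, the sequence is roughly: snapshot, zfs send | zfs
receive into a new dataset in the same pool (so the copy's blocks are
allocated across whatever free space the vdevs have today), swap the
names, then destroy the original. A rough sketch of that flow (Python
driving the zfs commands; the pool and filesystem names are placeholders,
and it assumes nothing is writing to the filesystem while it runs and
that there is room for a temporary second copy):

import subprocess

def rebalance(pool, fs):
    src = pool + "/" + fs      # e.g. "tank/data" - placeholder names
    dst = src + ".rebal"       # temporary copy
    old = src + ".old"

    subprocess.check_call(["zfs", "snapshot", src + "@rebal"])

    # zfs send | zfs receive within the same pool.
    send = subprocess.Popen(["zfs", "send", src + "@rebal"],
                            stdout=subprocess.PIPE)
    subprocess.check_call(["zfs", "receive", dst], stdin=send.stdout)
    send.wait()

    # Swap the names, then destroy the old copy to free its space
    # (in real life, verify the copy before this last step).
    subprocess.check_call(["zfs", "rename", src, old])
    subprocess.check_call(["zfs", "rename", dst, src])
    subprocess.check_call(["zfs", "destroy", "-r", old])

# e.g. rebalance("tank", "somefs")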

One last thing: unless this is some form of backup pool, or the data on it isn't important, avoid raidz vdevs in such a large pool!

--
Ian.
