A small follow-up on my tests, in case readers are
interested in some numbers: the UltraStar 3TB disk got
filled with a semi-random selection of data from our old
pool in 24 hours sharp, including large dump files,
small source directories copied via rsync, and some
recursive zfs sends of VM storage with autosnaps ranging
from near-zero size to considerable increments.
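
In case it helps, the recursive sends were along these
lines (a sketch only; the dataset and snapshot names here
are hypothetical stand-ins, not the actual ones):

# zfs snapshot -r pond/vm@migrate
# zfs send -R pond/vm@migrate | zfs receive -d test

The -R flag bundles the dataset tree with all of its
snapshots (the autosnaps mentioned above) into a single
replication stream, and receive -d recreates the source
path layout under the target pool.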

Overall, the write speed to the on-disk pool ranged from
about 3-6 MB/s for small files to 40-95 MB/s for larger
ones (e.g. ISOs and VM disk images).
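
(Figures like these are easy to watch at the pool level,
for example:

# zpool iostat test 5

which prints bandwidth and IOPS for the pool every five
seconds.)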

The resulting zpools keep a bit of spare space (AFAIK to
fight fragmentation), roughly 4 GB per 250 GB of pool
size, but no more user data can be added to the datasets:

# zpool list
...
test   2.50T  2.46T  40.0G    98%  ONLINE  -
test2   232G   228G  4.37G    98%  ONLINE  -

# df -k /test /test2
Filesystem            kbytes    used   avail capacity  Mounted on
test                 2642411542 2372859619       0   100%    /test
test2                239468544 238689091  778716   100%    /test2
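
The same near-full state is visible per dataset, where
ZFS reports little or no space available even though
zpool list still shows a few gigabytes held in reserve;
a quick way to cross-check is, for example:

# zfs list -o name,used,avail test test2

(zpool list accounts for raw pool space, while zfs list
shows what the datasets may actually consume).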


The two filled-up pools are scrubbing now, in search of
disk errors as well as the half-feared, half-expected
errors from a possible overflow in some LBA-address
counter (or something like that) which prevented snv_117
from seeing the full disk size in the first place. The
current impression is that all is OK, knocking on wood.
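
For reference, a scrub is started and watched with the
usual commands:

# zpool scrub test
# zpool scrub test2
# zpool status -v test test2

zpool status reports the scrub's progress and any
checksum errors found along the way.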

Scrubbing reads at 35-90 MB/s, leaning more toward ~75,
with the disk processing over 600 IOPS at 100% busy in
iostat. Little fragmentation after a single write pass
with no deletions so far is oh-so-good! ;)
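
The device-level numbers above come from something like
the standard Solaris iostat:

# iostat -xnz 5

where r/s and w/s add up to the IOPS and the %b column
is the busy percentage.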




2012-05-17 1:21, Jim Klimov wrote:
2012-05-15 19:17, casper....@oracle.com wrote:
Your old release of Solaris (nearly three years old) doesn't support
disks over 2TB, I would think.

(A 3TB disk is 3E12 bytes, the 2TB limit is 2^41 bytes,
and the difference is around 800GB)
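
(The arithmetic checks out; e.g. in bash:

# echo $(( 3 * 10 ** 12 - 2 ** 41 ))
800976744448

i.e. roughly 801E9 bytes, about 800GB.)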

# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pond   10.2T  9.49T   724G  93%  ONLINE  -
rpool   232G   120G   112G  51%  ONLINE  -
test   2.50T  76.5K  2.50T   0%  ONLINE  -
test2   232G  76.5K   232G   0%  ONLINE  -


Now writing stuff into the new test pools to see if any
conflicts arise in snv_117's support of the disk size.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
