[zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-15 Thread Jim Klimov

Hello all, I'd like some practical advice on migrating a
Sun Fire X4500 (Thumper) from its aging data disks to a set
of newer ones. Some of the questions below are my own, others
are passed on from the customer - I may not consider all of
them sane, but must ask anyway ;)

1) They hope to use 3TB disks, and hotplugged an Ultrastar 3TB
   for testing. However, the system only sees it as an 802GB
   device, via Solaris format/fdisk as well as via parted [1].
   Is that a limitation of the Marvell controller, the disk,
   or the current OS (snv_117)? Would it be cleared by a reboot
   and proper disk detection on POST (I'll test tonight), or
   will these big disks not work in an X4500, period?

[1] 
http://code.google.com/p/solaris-parted/downloads/detail?name=solaris-parted-0.2.tar.gz&can=2&q=


Gotta run now, will ask more in the evening :)
Thanks for now,
//Jim



Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-15 Thread Casper.Dik

> Hello all, I'd like some practical advice on migrating a
> Sun Fire X4500 (Thumper) from its aging data disks to a set
> of newer ones. Some of the questions below are my own, others
> are passed on from the customer - I may not consider all of
> them sane, but must ask anyway ;)
>
> 1) They hope to use 3TB disks, and hotplugged an Ultrastar 3TB
> for testing. However, the system only sees it as an 802GB
> device, via Solaris format/fdisk as well as via parted [1].
> Is that a limitation of the Marvell controller, the disk,
> or the current OS (snv_117)? Would it be cleared by a reboot
> and proper disk detection on POST (I'll test tonight), or
> will these big disks not work in an X4500, period?



Your old release of Solaris (nearly three years old) doesn't support
disks over 2TB, I would think.

(A 3TB disk is 3E12 bytes, the 2TB limit is 2^41 bytes, and the
difference is around 800GB.)
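
Spelling that out (a rough sketch - 3E12 is the nominal decimal
capacity, and the exact byte count varies by drive model):

   3TB drive:   3,000,000,000,000 bytes (nominal)
   2TB limit:   2^41 = 2,199,023,255,552 bytes
                (a 32-bit count of 512-byte LBAs)
   remainder:     800,976,744,448 bytes ~= 801GB

which matches the ~802GB that format reports: the capacity wraps
around at the 2TB boundary rather than being clipped to 2TB.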

Casper



Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-15 Thread Jim Klimov

Urgent interrupt processed, I'm back to my questions :)

Thanks to Casper for the suggestion; the box is scheduled to
reboot soon and I'll try a newer Solaris (oi_151a3, probably)
as well. UPDATE: Yes, oi_151a3 sees all 2.73TB of the disk,
so my first question is resolved: the original Thumper
(Sun Fire X4500) does see 3TB disks, at least with a current
OS - there seem to be no hardware limitations. The disk is
recognized as ATA-Hitachi HUA72303-A580-2.73TB.

Booted back into snv_117, the box again sees the smaller
disk size - so it is an OS thing indeed. OS migration goes
into the upgrade plans, check! ;}
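
For anyone repeating the test, a quick non-interactive way to see
what the OS detects (a sketch; the device name c0t1d0 is just an
example, substitute your own):

   # list all disks with their detected sizes/labels:
   echo | format
   # per-device details, including the Size: field:
   iostat -En c0t1d0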

2012-05-15 13:41, Jim Klimov wrote:

> Hello all, I'd like some practical advice on migrating a
> Sun Fire X4500 (Thumper) from its aging data disks to a set
> of newer ones. Some of the questions below are my own, others
> are passed on from the customer - I may not consider all of
> them sane, but must ask anyway ;)
>
> 1) They hope to use 3TB disks, and hotplugged an Ultrastar 3TB
> for testing. However, the system only sees it as an 802GB
> device, via Solaris format/fdisk as well as via parted [1].
> Is that a limitation of the Marvell controller, the disk,
> or the current OS (snv_117)? Would it be cleared by a reboot
> and proper disk detection on POST (I'll test tonight), or
> will these big disks not work in an X4500, period?
>
> [1]
> http://code.google.com/p/solaris-parted/downloads/detail?name=solaris-parted-0.2.tar.gz&can=2&q=



The Thumper box has 48 250GB disks, beginning to die off,
currently arranged as two ZFS pools: an rpool built over the
two bootable drives, and a data pool built from 45 drives as
9*(4+1) raidz1 vdevs striped together, plus one hot spare.
AFAIK the number of raidz vdevs cannot be reduced without
compromising data integrity/protection, and this is the only
server around with ~9TB of storage capacity - so there are
no backups, and nowhere to temporarily and safely migrate
the data to. The budget is tight. We are weighing assorted
options and would like suggestions - perhaps some list users
have been through similar transitions, and/or know which
options to avoid like fire ;)

We know that extra redundancy is highly recommended for
big HDDs, so in-place autoexpansion of the raidz1 pool
onto 3TB disks is out of the question.

So far the plan is to migrate the current pool onto 3TB
drives, and it seems that with the recommended 3-disk
redundancy for large drives, a raidz3 of 8+3 disks plus
one hot spare would fit nicely onto the 6 controllers
(2 disks each). Mirrors of 3 or 4 disks (one data copy
plus 2 or 3 redundant ones), times the 5 vdevs needed for
the minimum desired new capacity, would fill most of the
box and cost a lot for relatively little space (though
reads would be fast).

What would the experienced people suggest - would raidz3
be good?
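
For concreteness, the layout I'm asking about would be created
roughly like this (a sketch; the device names are placeholders,
two disks per controller):

   # 11 disks in a single raidz3 vdev (8 data + 3 parity),
   # plus the 12th disk as a hot spare:
   zpool create newpool \
     raidz3 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
            c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 \
     spare c5t1d0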

Should SSDs help? I'm primarily thinking of L2ARC, though
there is NFS and iSCSI serving that might benefit from a
ZIL as well. What SSD sizes and models would people suggest
for a 16GB-RAM server? AFAIK the RAM could be upgraded to
32GB (perhaps at a high cost), but sadly no more than that
can be installed, according to the docs and the availability
of compatible memory modules; should the RAM doubling be
pursued?
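
If we do go the SSD route, attaching them is at least mechanically
simple (a sketch; device names invented):

   # L2ARC read cache - safe to lose, can be added or removed at will:
   zpool add newpool cache c6t0d0
   # separate ZIL (slog) for synchronous NFS/iSCSI writes; mirrored,
   # since losing an unmirrored slog on older pool versions can be fatal:
   zpool add newpool log mirror c6t1d0 c6t2d0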

I know it is hard to give suggestions about something vague;
the storage profile is a bit of everything in a software
development company: home directories, regular rolling
backups, images of the software the company produces, VM
images for test systems (executed on remote VM hosts, using
the Thumper's storage via ZFS/NFS and ZFS/iSCSI), and some
databases of practically unlimited growth for the testbed
systems. Fragmentation is rather high: resilvering one disk
took 15 hours, and weekly scrubs take about 85 hours. The
server uses a 1Gbit LAN connection (it might become 4Gbit
via link aggregation, but the server has not produced bursts
of disk traffic, even locally, big enough to saturate the
single uplink).
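
(Should aggregation ever become worthwhile, it is mechanically a
one-liner - a sketch with invented link names; note that pre-Crossbow
builds such as snv_117 use -d device and a numeric key instead of
the -l link form shown here:)

   dladm create-aggr -l e1000g0 -l e1000g1 -l e1000g2 -l e1000g3 aggr0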

Now on to the migration options we brainstormed...

IDEA 1

By far the safest-seeming option: rent or buy a 12-disk
eSATA enclosure and a PCI-X adapter (model suggestions
welcome - it should support 3TB disks), configure the new
pool in the enclosure, zfs send | zfs recv the data, then
restart the local zones with their tasks (databases) and
the nfs/iscsi services from the new pool. Ultimately, take
the old pool's disks out, plug the new pool's disks (and
SSDs) into the Thumper, and live happy and fast :)

This option requires an enclosure and adapter, and we have
no clue what to choose or how much it would cost on top of
the raw disk price.
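
The data move itself would be the usual replication dance - a
sketch with invented pool and snapshot names; the recv -u flag
may not exist on older builds:

   # replicate the whole dataset tree to the new pool:
   zfs snapshot -r pool@migrate-1
   zfs send -R pool@migrate-1 | zfs recv -Fdu newpool
   # ...later, during a short service downtime, catch up the tail:
   zfs snapshot -r pool@migrate-2
   zfs send -R -i pool@migrate-1 pool@migrate-2 | zfs recv -Fdu newpool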

IDEA 2

Split the original data into several pools, migrating onto
mirrors that each start with one big disk.

This idea proposes that the single hot-spare disk bay be
populated by one new big disk at a time (the first one is
already inside), with a pool created on top of that one
disk. Up to 3TB of data is sent to the new pool, then the
next disk is inserted and the next pool created and filled.
The original pool remains intact while the new pools are
later upgraded to N-way mirrors, and if some sectors do
become corrupt, the data can be restored with some manual
fuss by plugging the old pool back in. The mechanics might
look like the sketch below.
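
A sketch (device and dataset names invented):

   # start a new pool on the one big disk in the spare bay:
   zpool create pool2 c5t7d0
   zfs snapshot -r pool/group1@move
   zfs send -R pool/group1@move | zfs recv -d pool2
   # later, as more big disks become available, grow the
   # single-disk pool into an N-way mirror:
   zpool attach pool2 c5t7d0 c4t7d0
   zpool attach pool2 c5t7d0 c3t7d0   # now a 3-way mirror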

This allows to [rest of message truncated in the archive]

Re: [zfs-discuss] Migration of a Thumper to bigger HDDs

2012-05-15 Thread Bob Friesenhahn
You forgot IDEA #6, where you take advantage of the fact that
zfs can be told to use sparse files as vdevs. This is rather
like your IDEA #3, but does not require that the disks be
partitioned.


This opens up many possibilities. Whole vdevs can be
virtualized into files on (i.e. moved onto) the remaining
physical vdevs. The drives freed up can then be replaced
with larger drives and used to start a new pool. It might
be easier to upgrade the existing drives in the pool first,
so that there is assured to be ample free space and the new
drives get some testing. There is initially no additional
risk from the raidz1 layout, since the drives will be about
as full as before.


I am not sure what additional risks are involved due to using files.
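
For the mechanics, a sketch (ZFS accepts plain files as vdevs;
paths and sizes invented - and note that in this scheme the
backing file lives on the same pool it serves, which is
presumably part of the risk):

   # create a sparse 250GB backing file on the pool's free space:
   mkfile -n 250g /pool/vdev-files/d1
   # swap a physical disk out of its raidz1 vdev for the file:
   zpool replace pool c2t3d0 /pool/vdev-files/d1
   # after the resilver, c2t3d0 is free to seed the new pool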

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/