Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-25 Thread Ben
Thanks very much everyone.

Victor, I did think about using VirtualBox, but I have a real machine and a 
supply of hard drives for a short time, so I'll test it out using that if I 
can.

Scott, of course — at work we use three-way mirrors and it works very well. 
It has saved us on occasion: we have detached the third mirror, upgraded, 
found the upgrade failed, and been able to revert from the third mirror 
instead of having to go through backups.

George, it will be great to see the 'autoexpand' in the next release.  I'm 
keeping my home server on stable releases for the time being :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread Ben
Hi all,

I have a ZFS mirror of two 500GB disks and I'd like to up these to 1TB disks. 
How can I do this? I must break the mirror, as I don't have enough controller 
ports on my system board. My current mirror looks like this:

r...@beleg-ia:/share/media# zpool status share
  pool: share
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        share       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5d0s0  ONLINE       0     0     0
            c5d1s0  ONLINE       0     0     0

errors: No known data errors

If I detach c5d1s0, add a 1TB drive, attach that, wait for it to resilver, then 
detach c5d0s0 and add another 1TB drive and attach that to the zpool, will that 
up the storage of the pool?
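Spelled out as commands, the sequence I have in mind is something like the 
following (I've made up c5d2s0 and c5d3s0 for the new 1TB drives; I don't 
know yet what they will actually be called):

```shell
# break the mirror by dropping one 500GB side
zpool detach share c5d1s0
# physically swap in the first 1TB drive, then mirror onto it
zpool attach share c5d0s0 c5d2s0
# after resilvering completes, drop the remaining 500GB disk
zpool detach share c5d0s0
# swap in the second 1TB drive and attach it to rebuild the mirror
zpool attach share c5d2s0 c5d3s0
```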

Thanks very much,
Ben


Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread dick hoogendijk
On Wed, 24 Jun 2009 03:14:52 PDT
Ben no-re...@opensolaris.org wrote:

> If I detach c5d1s0, add a 1TB drive, attach that, wait for it to
> resilver, then detach c5d0s0 and add another 1TB drive and attach
> that to the zpool, will that up the storage of the pool?

That will do the trick perfectly. I just did the same last week ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread Thomas Maier-Komor
dick hoogendijk schrieb:
> On Wed, 24 Jun 2009 03:14:52 PDT
> Ben no-re...@opensolaris.org wrote:
>
>> If I detach c5d1s0, add a 1TB drive, attach that, wait for it to
>> resilver, then detach c5d0s0 and add another 1TB drive and attach
>> that to the zpool, will that up the storage of the pool?
>
> That will do the trick perfectly. I just did the same last week ;-)
 

Doesn't the detach command leave the detached disk unassociated with any
pool? I think it might be better to import the pool with only one half of
the mirror, without detaching the disk, and then do a zpool replace. That
way, if something goes wrong during the resilver, you still have the other
half of the mirror to bring your pool back up again. If you detach the disk
up front, this won't be possible.

Just an idea...

- Thomas


Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread Ben
Thomas, 

Could you post an example of what you mean (i.e. the commands in the order 
to use them)? I've not played with ZFS that much and I don't want to muck my 
system up (I have the data backed up, but I'm more concerned about getting 
myself into a mess and having to reinstall, thus losing my configurations).

Many thanks for both of your replies,
Ben


Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread Ben
Many thanks Thomas, 

I have a test machine so I shall try it on that before I try it on my main 
system.

Thanks very much once again,
Ben


Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread Victor Latushkin

On 24.06.09 17:10, Thomas Maier-Komor wrote:
> Ben schrieb:
>> Could you post an example of what you mean (i.e. the commands in the
>> order to use them)?
>
> I'm not an expert on this, and I haven't tried it, so beware:
>
> 1) If the pool you want to expand is not the root pool:
>
> $ zpool export mypool
> # now replace one of the disks with a new disk
> $ zpool import mypool
> # zpool status will show that mypool is in a degraded state because of
> # a missing disk
> $ zpool replace mypool replaceddisk
> # now the pool will start resilvering
>
> # once it is done resilvering:
> $ zpool detach mypool otherdisk
> # now physically replace otherdisk
> $ zpool replace mypool otherdisk


The last command would fail, as otherdisk would no longer be in mypool.

Though you can always play with files first (or with VirtualBox etc):

# preparation

mkdir -p /var/tmp/disks/removed
mkfile -n 64m /var/tmp/disks/disk0
mkfile -n 64m /var/tmp/disks/disk1
mkfile -n 128m /var/tmp/disks/bigdisk0
mkfile -n 128m /var/tmp/disks/bigdisk1
zpool create test mirror /var/tmp/disks/disk0 /var/tmp/disks/disk1
zpool list test

# let's start by making sure there's no latent errors:

zpool scrub test
while zpool status -v test | grep % ; do sleep 1; done
zpool status -v test

zpool export test
mv /var/tmp/disks/disk0 /var/tmp/disks/removed/disk0

# you don't need '-d /path' with real disks
zpool import -d /var/tmp/disks test
zpool status -v test

# insert new disk
mv /var/tmp/disks/bigdisk0 /var/tmp/disks/disk0
zpool replace test /var/tmp/disks/disk0

while zpool status -v test | grep % ; do sleep 1; done
zpool status -v test

# make sure that resilvering is complete
zpool detach test /var/tmp/disks/disk1
mv /var/tmp/disks/disk1 /var/tmp/disks/removed/disk1

# insert new disk
mv /var/tmp/disks/bigdisk1 /var/tmp/disks/disk1
zpool attach test /var/tmp/disks/disk0 /var/tmp/disks/disk1
while zpool status -v test | grep % ; do sleep 1; done
zpool status -v test
zpool list test


hth,
victor


Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread George Wilson

Ben wrote:
> If I detach c5d1s0, add a 1TB drive, attach that, wait for it to resilver,
> then detach c5d0s0 and add another 1TB drive and attach that to the zpool,
> will that up the storage of the pool?


The following changes, which went into snv_116, change this behavior:

PSARC 2008/353 zpool autoexpand property
6475340 when lun expands, zfs should expand too
6563887 in-place replacement allows for smaller devices
6606879 should be able to grow pool without a reboot or export/import
6844090 zfs should be able to mirror to a smaller disk

With this change we introduced a new property, 'autoexpand', which you must 
enable if you want devices to grow automatically (this includes replacing 
them with larger ones). Alternatively, you can use the '-e' (expand) option 
to 'zpool online' to grow individual drives even if 'autoexpand' is 
disabled. The reason we made this change was so that all device expansion is 
managed in the same way. I'll try to blog about this soon, but for now be 
aware that post-snv_116 the typical method of growing a pool by replacing 
its devices will require at least one additional step.
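Concretely, the extra step looks something like this (using the 'share' pool 
and device names from earlier in the thread as the example):

```shell
# opt in once, before replacing the drives, so the pool grows by itself
zpool set autoexpand=on share
# ...then replace/resilver the disks as before...

# or, leaving 'autoexpand' off, expand an already-replaced device by hand:
zpool online -e share c5d0s0
```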

Thanks,
George




Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread Scott Lawson



Thomas Maier-Komor wrote:
> Ben schrieb:
>> Could you post an example of what you mean (i.e. the commands in the
>> order to use them)?
>
> I'm not an expert on this, and I haven't tried it, so beware:
>
> 1) If the pool you want to expand is not the root pool:
>
> $ zpool export mypool
> # now replace one of the disks with a new disk
> $ zpool import mypool
> # zpool status will show that mypool is in a degraded state because of
> # a missing disk
> $ zpool replace mypool replaceddisk
> # now the pool will start resilvering
>
> # once it is done resilvering:
> $ zpool detach mypool otherdisk
> # now physically replace otherdisk
> $ zpool replace mypool otherdisk

This will all work well, but I have a couple of suggestions for you as well.

If you are using mirrored vdevs, you can also grow the vdev by making it a 
three- or four-way mirror. That way you don't lose resiliency in your vdev 
while you are migrating to larger disks. Of course, you have to be able to 
take the extra device in your system, either via a spare drive bay in a 
storage enclosure, or via SAN or iSCSI based LUNs.

When you have a lot of data and the business requires you to minimise risk 
as much as possible, this is a good approach. The pool was only offline for 
14 seconds to gain the extra space, and at all times there were *always* two 
devices in my mirror vdev.

Here is a cut and paste of this process from just the other day, on a live 
production server where the maintenance window was only 5 minutes. This pool 
was increased from 300 to 500 GB on LUNs from two disparate datacentres.

2009-06-17.13:57:05 zpool attach blackboard c4t600C0FF00924686710D4CF02d0 c4t600C0FF00082CA2312B99E05d0
2009-06-17.18:12:14 zpool detach blackboard c4t600C0FF00080797CC7A87F02d0
2009-06-17.18:12:57 zpool attach blackboard c4t600C0FF00924686710D4CF02d0 c4t600C0FF00086136F22B65F05d0
2009-06-17.20:02:00 zpool detach blackboard c4t600C0FF00924686710D4CF02d0
2009-06-18.05:58:52 zpool export blackboard
2009-06-18.05:59:06 zpool import blackboard

For home users this is probably overkill, but I thought I would mention it 
for more enterprise-type people who are perhaps familiar with Solaris Volume 
Manager (disksuite) but not so much with ZFS.
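Translated to a small two-disk home mirror like Ben's, the same 
attach-before-detach approach might look like the sketch below. The names of 
the new 1TB drives (c6d0s0, c6d1s0) are invented; substitute whatever your 
controller reports.

```shell
# attach the first 1TB drive as a third side of the mirror; the pool
# stays fully redundant while it resilvers
zpool attach share c5d0s0 c6d0s0
# once resilvering completes, drop one of the old 500GB sides
zpool detach share c5d0s0
# attach the second 1TB drive, let it resilver, then drop the last old disk
zpool attach share c6d0s0 c6d1s0
zpool detach share c5d1s0
# pre-snv_116, an export/import is needed for the pool to see the new size
zpool export share
zpool import share
```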


> 2) If you are working on the root pool, just skip the export/import part
> and boot with only one half of the mirror. Don't forget to run installgrub
> after replacing a disk.
>
> HTH,
> Thomas


--
___


Scott Lawson
Systems Architect
Manukau Institute of Technology
Information Communication Technology Services Private Bag 94006 Manukau
City Auckland New Zealand

Phone  : +64 09 968 7611
Fax: +64 09 968 7641
Mobile : +64 27 568 7611

mailto:sc...@manukau.ac.nz

http://www.manukau.ac.nz




perl -e 'print
$i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

 

