Re: [zfs-discuss] Remove disk

2012-12-07 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
> 
> On Thu, Dec 6, 2012 at 12:35 AM, Albert Shih  wrote:
>  On 01/12/2012 at 08:33:31-0700, Jan Owoc wrote:
> 
> > 2) replace the disks with larger ones one-by-one, waiting for a
> > resilver in between
> 
> This is the point where I don't see how to do it. I currently have 48 disks, from
> /dev/da0 -> /dev/da47 (I'm on FreeBSD 9.0), let's say 3 TB each.

You have 12 x 2T disks in a raidz2, and you want to replace those disks with 4T 
each.  Right?

Start with a scrub.  Wait for it to complete.  Ensure you have no errors.

sudo format -e < /dev/null > before.txt
Then "zpool offline" one disk.  Pull it out and stick a new 4T disk in its 
place.  "devfsadm -Cv" to recognize the new disk.
sudo format -e < /dev/null > after.txt
diff before.txt after.txt
You should see that one device disappeared and a new one was created.
Now "zpool replace" to replace the old disk with the new disk.

"zpool status" should show the new drive resilvering.
Wait for the resilver to finish.

Repeat 11 more times.  Replace each disk, one at a time, with resilver in 
between.

When you're all done, it might expand to the new size automatically, or you 
might need to play with the "autoexpand" property to make use of the new 
storage space.
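To put the whole loop in one place, here is a rough sketch of one iteration as it would look on a Solaris-derived system (the pool name "tank" and the c0t5d0/c0t9d0 device names are placeholders; the device-listing steps differ on FreeBSD):

  zpool scrub tank                          # verify pool health first
  zpool status tank                         # wait for the scrub, check for errors
  sudo format -e < /dev/null > before.txt   # snapshot the device list
  zpool offline tank c0t5d0                 # take the old disk offline
  # ...physically swap the old disk for the new 4T one...
  devfsadm -Cv                              # rebuild /dev entries
  sudo format -e < /dev/null > after.txt
  diff before.txt after.txt                 # spot the new device name
  zpool replace tank c0t5d0 c0t9d0          # old device, then new device
  zpool status tank                         # watch the resilver finish
  # after the 12th disk:
  zpool set autoexpand=on tank              # or export/import to grow the vdev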

What percentage full is your pool?
When you're done, please write back to tell us how much time this takes.  I 
predict it will take a very long time, and I'm curious to know exactly how 
much.  Before you start, I'm going to guess ...  80% full, and 7-10 days to 
resilver each drive.  So the whole process will take you a few months to 
complete.  (That's the disadvantage of a bunch of disks in a raidzN 
configuration.)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-06 Thread Freddie Cash
On Thu, Dec 6, 2012 at 12:35 AM, Albert Shih  wrote:

>  On 01/12/2012 at 08:33:31-0700, Jan Owoc wrote:
>
> > 2) replace the disks with larger ones one-by-one, waiting for a
> > resilver in between
>
> This is the point where I don't see how to do it. I currently have 48 disks, from
> /dev/da0 -> /dev/da47 (I'm on FreeBSD 9.0), let's say 3 TB each.
>
> I have 4 raidz2 vdevs, the first from /dev/da0 -> /dev/da11, etc.
>
> So I physically add a new enclosure with 12 new disks, for example 4 TB disks.
>
> I'm going to have new devices /dev/da48 --> /dev/da59.
>
> Say I want to remove /dev/da0 -> /dev/da11. First I pull out /dev/da0.
> The first raidz2 is going to be in a «degraded» state. So I'm going to tell the
> pool that the new disk is /dev/da48.
>

zpool replace <poolname> da0 da48



> Repeat this process until /dev/da11 is replaced by /dev/da59.
>
> But at the end, how much space am I going to use on those /dev/da48 -->
> /dev/da59? Am I going to have 3 TB or 4 TB? Because each replacement completes
> while ZFS is only using 3 TB, so how is it magically going to use 4 TB at the
> end?
>

The first disk you replace, it will use 3 TB, the size of the disk it
replaced.

The second disk you replace, it will use 3 TB, the size of the disk it
replaced.

...

The 12th disk you replace, it will use 3 TB, the size of the disk it
replaced.

However, now that all of the disks in the raidz vdev have been replaced,
the overall size of the vdev will increase to use the full 4 TB of each
disk.  This either happens automatically (if the autoexpand property is on), or
manually by exporting and re-importing the pool.
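A minimal sketch of both options, assuming a pool named "tank" (a placeholder):

  zpool set autoexpand=on tank   # option 1: grow automatically once the last replace finishes
  # option 2, after the last replace:
  zpool export tank
  zpool import tank
  zpool list tank                # SIZE should now reflect the 4 TB disks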

> Second question: when I pull out the first enclosure, meaning the
> old /dev/da0 --> /dev/da11, and reboot the server, the kernel is going to give
> those disks new numbers, meaning
>
> old /dev/da12 --> /dev/da0
> old /dev/da13 --> /dev/da1
> etc...
> old /dev/da59 --> /dev/da47
>
> how is ZFS going to manage that?
>

Every disk that is part of a ZFS pool has metadata on it that records
which pool it's part of, which vdev it's part of, etc.  Thus, if you do an
export followed by an import, ZFS will read the metadata off the disks
and sort things out automatically.

-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-06 Thread Jim Klimov

On 2012-12-06 09:35, Albert Shih wrote:


>> 1) add a 5th top-level vdev (eg. another set of 12 disks)
>
> That's not a problem.


That IS a problem if you're going to ultimately remove an enclosure -
once added, you won't be able to remove the extra top-level VDEV from
your ZFS pool.


>> 2) replace the disks with larger ones one-by-one, waiting for a
>> resilver in between
>
> This is the point where I don't see how to do it. I currently have 48 disks, from
> /dev/da0 -> /dev/da47 (I'm on FreeBSD 9.0), let's say 3 TB each.
>
> I have 4 raidz2 vdevs, the first from /dev/da0 -> /dev/da11, etc.
>
> So I physically add a new enclosure with 12 new disks, for example 4 TB disks.
>
> I'm going to have new devices /dev/da48 --> /dev/da59.
>
> Say I want to remove /dev/da0 -> /dev/da11. First I pull out /dev/da0.


I believe FreeBSD should behave similarly to the Solaris-based
OSes here. Since your pools are not yet "broken", and since you have the
luxury of all disks being present during migration, it is safer not
to pull out a disk physically and put a new one in its place
(physically or via hot-sparing), but rather to do a software replacement
with "zpool replace". This way your pool does not lose redundancy for
the duration of the replacement.
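To make that concrete, a minimal sketch of one such software replacement (pool name "tank" is a placeholder; da0 stays online throughout):

  zpool replace tank da0 da48    # resilver onto da48 while da0 is still healthy
  zpool status tank              # shows da0/da48 under a temporary "replacing" vdev
  # da0 is detached automatically once the resilver completes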


> The first raidz2 is going to be in a «degraded» state. So I'm going to tell the
> pool that the new disk is /dev/da48.
>
> Repeat this process until /dev/da11 is replaced by /dev/da59.


Roughly so. Other list members might chime in - but MAYBE it is even
possible or advisable to do software replacement on all 12 disks in
parallel (since the originals are all present)?


> But at the end, how much space am I going to use on those /dev/da48 -->
> /dev/da59? Am I going to have 3 TB or 4 TB? Because each replacement completes
> while ZFS is only using 3 TB, so how is it magically going to use 4 TB at the
> end?


While the migration is underway and some but not all disks have
completed it, you can only address the old size (3 TB); when your
active disks are all big, you'll suddenly see the pool expand to
use the available space (if the autoexpand property is on), or you
can force it with a series of "zpool online -e" commands on the components.
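A minimal sketch of the manual variant, assuming the pool is named "tank" and da48/da49/... are the replaced components (placeholders):

  zpool online -e tank da48 da49 da50   # ask the named components to expand
  zpool list tank                       # SIZE grows once every disk in the vdev is big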


> When I would like to change the disks, I also would like to change the disk
> enclosure; I don't want to keep using the old one.
>
> Second question: when I pull out the first enclosure, meaning the
> old /dev/da0 --> /dev/da11, and reboot the server, the kernel is going to give
> those disks new numbers, meaning
>
> old /dev/da12 --> /dev/da0
> old /dev/da13 --> /dev/da1
> etc...
> old /dev/da59 --> /dev/da47
>
> how is ZFS going to manage that?



Supposedly, it should manage that well :)
Once your old enclosure's disks are no longer in use and you can remove
the enclosure, you should "zpool export" your pool before powering off the
hardware. This removes the pool from the OS's zpool cachefile, so the next
import performs a full search for components. That is slower than the
cachefile path when you have many devices at static locations, because
it ensures that all storage devices are consulted and a new map of
the pool components' locations is drawn. So if the device numbering
changes due to hardware changes and OS reconfiguration, the full
zpool import will take note of it and import the old data from the
new addresses (device names).
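A minimal sketch of that sequence, with "tank" as a placeholder pool name:

  zpool export tank     # before powering off / removing the old enclosure
  # ...remove the enclosure, reboot, let the da numbering shuffle...
  zpool import          # with no argument: scans devices and lists importable pools
  zpool import tank     # import by name, whatever the new device names turn out to be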

HTH,
//Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-06 Thread Albert Shih
 On 01/12/2012 at 08:33:31-0700, Jan Owoc wrote:
Hi,

Sorry, I've been very busy these past few days.

> >> >
> >> > http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
> 
> The commands described on that page do not have direct equivalents in
> zfs. There is currently no way to reduce the number of "top-level
> vdevs" in a pool or to change the RAID level.

OK. 

> >> > I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Among those
> >> > 48 disks I have 36 x 3 TB and 12 x 2 TB.
> >> > Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool, ask the zpool
> >> > to migrate all data from those 12 old disks onto the new ones, and remove the old
> >> > disks?
> 
> In your specific example this means that you have 4 * RAIDZ2 vdevs of 12
> disks each. ZFS doesn't allow you to change the fact that there are 4. ZFS
> doesn't allow you to change any of them from RAIDZ2 to any other
> configuration (eg RAIDZ). ZFS doesn't allow you to change the fact
> that you have 12 disks in a vdev.

OK thanks. 

> 
> If you don't have a full set of new disks on a new system, or enough
> room on backup tapes to do a backup-restore, there are only two ways
> to add capacity to the pool:
> 1) add a 5th top-level vdev (eg. another set of 12 disks)

That's not a problem. 

> 2) replace the disks with larger ones one-by-one, waiting for a
> resilver in between

This is the point where I don't see how to do it. I currently have 48 disks, from
/dev/da0 -> /dev/da47 (I'm on FreeBSD 9.0), let's say 3 TB each.

I have 4 raidz2 vdevs, the first from /dev/da0 -> /dev/da11, etc.

So I physically add a new enclosure with 12 new disks, for example 4 TB disks.

I'm going to have new devices /dev/da48 --> /dev/da59.

Say I want to remove /dev/da0 -> /dev/da11. First I pull out /dev/da0.
The first raidz2 is going to be in a «degraded» state. So I'm going to tell the
pool that the new disk is /dev/da48.

Repeat this process until /dev/da11 is replaced by /dev/da59.

But at the end, how much space am I going to use on those /dev/da48 -->
/dev/da59? Am I going to have 3 TB or 4 TB? Because each replacement completes
while ZFS is only using 3 TB, so how is it magically going to use 4 TB at the
end?

Second question: when I pull out the first enclosure, meaning the
old /dev/da0 --> /dev/da11, and reboot the server, the kernel is going to give
those disks new numbers, meaning

old /dev/da12 --> /dev/da0
old /dev/da13 --> /dev/da1
etc...
old /dev/da59 --> /dev/da47

how is ZFS going to manage that?

> > When I would like to change the disks, I also would like to change the disk
> > enclosure; I don't want to keep using the old one.
> 
> You didn't give much detail about the enclosure (how it's connected,
> how many disk bays it has, how it's used etc.), but are you able to
> power off the system and transfer all the disks at once?

Server: Dell PowerEdge 610
4 x enclosures: MD1200, each with 12 disks of 3 TB
Connection: SAS
SAS card: LSI
The enclosures are chained:

server --> MD1200.1 --> MD1200.2 --> MD1200.3 --> MD1200.4


> 
> 
> > And what happens if I have 24 or 36 disks to change? It would take months to do
> > that.
> 
> Those are the current limitations of zfs. Yes, with 12x2TB of data to
> copy it could take about a month.

OK. 

> 
> If you are feeling particularly risky and have backups elsewhere, you
> could swap two drives at once, but then you lose all your data if one
> of the remaining 10 drives in the vdev failed.

OK. 

Thanks for the help

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
jeu 6 déc 2012 09:20:55 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-02 Thread Andrew Gabriel

Bob Friesenhahn wrote:

> On Sat, 1 Dec 2012, Jan Owoc wrote:
>
>>> When I would like to change the disks, I also would like to change the disk
>>> enclosure; I don't want to keep using the old one.
>>
>> You didn't give much detail about the enclosure (how it's connected,
>> how many disk bays it has, how it's used etc.), but are you able to
>> power off the system and transfer all the disks at once?
>>
>>> And what happens if I have 24 or 36 disks to change? It would take months
>>> to do that.
>>
>> Those are the current limitations of zfs. Yes, with 12x2TB of data to
>> copy it could take about a month.



> You can create a brand new pool with the new chassis and use 'zfs
> send' to send a full snapshot of each filesystem to the new pool.
> After the bulk of the data has been transferred, take new snapshots
> and send the remainder. This expects that both pools can be available
> at once.


or if you don't care about existing snapshots, use Shadow Migration to 
move the data across.
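If I remember the Solaris 11 feature correctly (treat the exact syntax as an assumption on my part, and the names/paths as placeholders), the idea is roughly:

  # source filesystem stays mounted (ideally read-only) at /oldpool/data
  zfs create -o shadow=file:///oldpool/data newpool/data
  # data then migrates in the background as it is accessed;
  # shadowstat(1M) reports progress, if I recall correctly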


--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-02 Thread Bob Friesenhahn

On Sat, 1 Dec 2012, Jan Owoc wrote:

>> When I would like to change the disks, I also would like to change the disk
>> enclosure; I don't want to keep using the old one.
>
> You didn't give much detail about the enclosure (how it's connected,
> how many disk bays it has, how it's used etc.), but are you able to
> power off the system and transfer all the disks at once?
>
>> And what happens if I have 24 or 36 disks to change? It would take months to do
>> that.
>
> Those are the current limitations of zfs. Yes, with 12x2TB of data to
> copy it could take about a month.



You can create a brand new pool with the new chassis and use 'zfs 
send' to send a full snapshot of each filesystem to the new pool. 
After the bulk of the data has been transferred, take new snapshots 
and send the remainder.  This expects that both pools can be available 
at once.
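A minimal sketch of that, with placeholder pool and snapshot names:

  zfs snapshot -r oldpool@migrate1
  zfs send -R oldpool@migrate1 | zfs recv -Fdu newpool      # bulk copy
  # ...later, at cut-over time...
  zfs snapshot -r oldpool@migrate2
  zfs send -R -i @migrate1 oldpool@migrate2 | zfs recv -Fdu newpool   # the remainder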


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-01 Thread craig
If you are attaching a new enclosure, make a new zpool in that enclosure with a 
temporary name and 'zfs send' snapshots from the old pool to the new pool, 
reading in with 'zfs recv'.
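A rough sketch of the whole round trip (pool, snapshot and device names are placeholders):

  zpool create newtank raidz2 da48 da49 da50 da51 da52 da53 da54 da55 da56 da57 da58 da59
  zfs snapshot -r tank@move
  zfs send -R tank@move | zfs recv -Fdu newtank
  zpool destroy tank            # only after verifying the copy
  zpool export newtank
  zpool import newtank tank     # re-import the new pool under the old name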



Craig Cory
Senior Instructor
ExitCertified.com

On Dec 1, 2012, at 3:20 AM, Albert Shih  wrote:

> On 30/11/2012 at 15:52:09+0100, Tomas Forsman wrote:
>> On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
>> 
>>> Hi all,
>>> 
>>> I would like to know if with ZFS it's possible to do something like this:
>>> 
>>>http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
>>> 
>>> meaning : 
>>> 
>>> I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Among those 48 disks
>>> I have 36 x 3 TB and 12 x 2 TB.
>>> Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool, ask the zpool
>>> to migrate all data from those 12 old disks onto the new ones, and remove the old
>>> disks?
>> 
>> You pull out one 2T, put in a 4T, wait for resilver (possibly tell it to
>> replace, if you don't have autoreplace on)
>> Repeat until done.
> 
> Well... in fact it's a little more complicated than that.
> 
> When I would like to change the disks, I also would like to change the disk
> enclosure; I don't want to keep using the old one.
> 
> And what happens if I have 24 or 36 disks to change? It would take months to do
> that.
> 
> Regards.
> 
> JAS
> -- 
> Albert SHIH
> DIO bâtiment 15
> Observatoire de Paris
> 5 Place Jules Janssen
> 92195 Meudon Cedex
> Téléphone : 01 45 07 76 26/06 86 69 95 71
> xmpp: j...@obspm.fr
> Heure local/Local time:
> sam 1 déc 2012 12:17:39 CET
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-01 Thread Jan Owoc
Hi Albert,


On Sat, Dec 1, 2012 at 4:20 AM, Albert Shih  wrote:
>  On 30/11/2012 at 15:52:09+0100, Tomas Forsman wrote:
>> On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
>> > I would like to know if with ZFS it's possible to do something like this:
>> >
>> > http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html

The commands described on that page do not have direct equivalents in
zfs. There is currently no way to reduce the number of "top-level
vdevs" in a pool or to change the RAID level.


>> > I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Among those 48 disks
>> > I have 36 x 3 TB and 12 x 2 TB.
>> > Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool, ask the zpool
>> > to migrate all data from those 12 old disks onto the new ones, and remove the old
>> > disks?

In your specific example this means that you have 4 * RAIDZ2 vdevs of 12
disks each. ZFS doesn't allow you to change the fact that there are 4. ZFS
doesn't allow you to change any of them from RAIDZ2 to any other
configuration (eg RAIDZ). ZFS doesn't allow you to change the fact
that you have 12 disks in a vdev.


>> You pull out one 2T, put in a 4T, wait for resilver (possibly tell it to
>> replace, if you don't have autoreplace on)
>> Repeat until done.

If you don't have a full set of new disks on a new system, or enough
room on backup tapes to do a backup-restore, there are only two ways
to add capacity to the pool:
1) add a 5th top-level vdev (eg. another set of 12 disks)
2) replace the disks with larger ones one-by-one, waiting for a
resilver in between


> When I would like to change the disks, I also would like to change the disk
> enclosure; I don't want to keep using the old one.

You didn't give much detail about the enclosure (how it's connected,
how many disk bays it has, how it's used etc.), but are you able to
power off the system and transfer all the disks at once?


> And what happens if I have 24 or 36 disks to change? It would take months to do
> that.

Those are the current limitations of zfs. Yes, with 12x2TB of data to
copy it could take about a month.

If you are feeling particularly risky and have backups elsewhere, you
could swap two drives at once, but then you lose all your data if one
of the remaining 10 drives in the vdev failed.


Jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-12-01 Thread Albert Shih
 On 30/11/2012 at 15:52:09+0100, Tomas Forsman wrote:
> On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
> 
> > Hi all,
> > 
> > I would like to know if with ZFS it's possible to do something like this:
> > 
> > http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
> > 
> > meaning : 
> > 
> > I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Among those 48 disks
> > I have 36 x 3 TB and 12 x 2 TB.
> > Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool, ask the zpool
> > to migrate all data from those 12 old disks onto the new ones, and remove the old
> > disks?
> 
> You pull out one 2T, put in a 4T, wait for resilver (possibly tell it to
> replace, if you don't have autoreplace on)
> Repeat until done.

Well... in fact it's a little more complicated than that.

When I would like to change the disks, I also would like to change the disk
enclosure; I don't want to keep using the old one.

And what happens if I have 24 or 36 disks to change? It would take months to do
that.

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
sam 1 déc 2012 12:17:39 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-11-30 Thread Jan Owoc
On Fri, Nov 30, 2012 at 9:05 AM, Tomas Forsman  wrote:
>
> I don't have it readily at
> hand how to check the ashift value on a vdev, anyone
> else/archives/google?
>

This? ;-)
http://lmgtfy.com/?q=how+to+check+the+ashift+value+on+a+vdev&l=1

The first hit has:
# zdb mypool | grep ashift
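For interpretation: ashift=9 corresponds to 512-byte sectors and ashift=12 to 4 KiB. A slightly narrower variant (pool name is a placeholder; the -C form is my assumption, it should print one ashift per top-level vdev from the cached config):

  zdb -C mypool | grep ashift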

Jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-11-30 Thread Tomas Forsman
On 30 November, 2012 - Jim Klimov sent me these 2,3K bytes:

> On 2012-11-30 15:52, Tomas Forsman wrote:
>> On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
>>
>>> Hi all,
>>>
>>> I would like to know if with ZFS it's possible to do something like this:
>>>
>>> http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
>
> Removing a disk - no, one still can not reduce the amount of devices
> in a zfs pool nor change raidzN redundancy levels (you can change
> single disks to mirrors and back), nor reduce disk size.
>
> As Tomas wrote, you can increase the disk size by replacing smaller
> ones with bigger ones.

.. unless you're hit by 512b/4k sector crap. I don't have it readily at
hand how to check the ashift value on a vdev, anyone
else/archives/google?

/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-11-30 Thread Jim Klimov

On 2012-11-30 15:52, Tomas Forsman wrote:

> On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
>
>> Hi all,
>>
>> I would like to know if with ZFS it's possible to do something like this:
>>
>> http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html


Removing a disk - no, one still cannot reduce the number of devices
in a zfs pool nor change raidzN redundancy levels (you can change
single disks to mirrors and back), nor reduce disk size.

As Tomas wrote, you can increase the disk size by replacing smaller
ones with bigger ones.

With sufficiently small starting disks and big new disks (i.e. moving
up from 1-2Tb to 4Tb) you can "cheat" by putting several partitions
on one drive and giving that to different pool components - if your
goal is to reduce the amount of hardware disks in the pool.

However, note that:

1) A single HDD becomes a SPOF, so you should put pieces of different
raidz sets onto particular disks - if a HDD dies, it does not bring
down a critical amount of pool components and does not kill the pool.

2) The disk mechanics will be "torn" between many requests to your
pool's top-level VDEVs, probably greatly reducing achievable IOPS
(since the TLVDEVs are accessed in parallel).

So while possible, this cheat is useful as a temporary measure -
i.e. while you migrate data and don't have enough drive bays to
hold the old and new disks, and want to be on the safe side by not
*removing* a good disk in order to replace it with a bigger one.
With this "cheat" you have all data safely redundantly stored on
disks at all time during migration. In the end this disk can be
the last piece of the puzzle in your migration.
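A rough sketch of that cheat on FreeBSD, with a 4 TB da48 temporarily standing in for 2 TB members of two *different* raidz2 sets (pool, device, label and size values are placeholders; double-check the gpart options on your release):

  gpart create -s gpt da48
  gpart add -t freebsd-zfs -s 1862G -l tmp-slice0 da48
  gpart add -t freebsd-zfs -s 1862G -l tmp-slice1 da48
  zpool replace tank da0  gpt/tmp-slice0    # member of the first raidz2
  zpool replace tank da12 gpt/tmp-slice1    # member of a different raidz2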



>> meaning:
>>
>> I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Among those 48 disks
>> I have 36 x 3 TB and 12 x 2 TB.
>> Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool, ask the zpool
>> to migrate all data from those 12 old disks onto the new ones, and remove the old
>> disks?
>
> You pull out one 2T, put in a 4T, wait for resilver (possibly tell it to
> replace, if you don't have autoreplace on).
> Repeat until done.
> If you have the physical space, you can first put in a new disk, tell it
> to replace and then remove the old.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk

2012-11-30 Thread Tomas Forsman
On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:

> Hi all,
> 
> I would like to know if with ZFS it's possible to do something like this:
> 
>   http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
> 
> meaning: 
> 
> I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Among those 48 disks
> I have 36 x 3 TB and 12 x 2 TB.
> Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool, ask the zpool
> to migrate all data from those 12 old disks onto the new ones, and remove the old
> disks?

You pull out one 2T, put in a 4T, wait for resilver (possibly tell it to
replace, if you don't have autoreplace on)
Repeat until done.
If you have the physical space, you can first put in a new disk, tell it
to replace and then remove the old.
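A minimal sketch of both variants (pool name "tank" and device names are placeholders):

  # variant 1: same bay, vdev degraded during the resilver
  zpool offline tank da0
  # ...swap the 2T for a 4T in the same slot...
  zpool replace tank da0           # single-argument form: new disk, old location

  # variant 2: spare bay available, redundancy kept throughout
  zpool replace tank da0 da48      # resilver onto da48, then pull da0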

/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Remove disk

2012-11-30 Thread Albert Shih
Hi all,

I would like to know if with ZFS it's possible to do something like this:

http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html

meaning: 

I have a zpool with 48 disks in 4 raidz2 vdevs (12 disks each). Among those 48 disks
I have 36 x 3 TB and 12 x 2 TB.
Can I buy 12 new 4 TB disks, put them in the server, add them to the zpool, ask the zpool
to migrate all data from those 12 old disks onto the new ones, and remove the old
disks?

Regards.


-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
ven 30 nov 2012 15:18:32 CET
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk from ZFS Pool

2009-08-05 Thread Ross Walker

On Aug 5, 2009, at 8:50 AM, Ketan  wrote:


> How can we remove a disk from a zfs pool? I want to remove disk c0d3
>
> zpool status datapool
>   pool: datapool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         datapool    ONLINE       0     0     0
>           c0d2      ONLINE       0     0     0
>           c0d3      ONLINE       0     0     0


You can't in that non-redundant pool.

Copy data off, destroy and re-create.
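A minimal sketch of that path (the backup location and snapshot name are placeholders):

  zfs snapshot -r datapool@evacuate
  zfs send -R datapool@evacuate > /backup/datapool.zstream   # or pipe straight into another pool
  zpool destroy datapool
  zpool create datapool c0d2                                 # rebuild without c0d3
  zfs recv -Fdu datapool < /backup/datapool.zstream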

-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk from ZFS Pool

2009-08-05 Thread Andre van Eyssen

On Wed, 5 Aug 2009, Ketan wrote:


> How can we remove a disk from a zfs pool? I want to remove disk c0d3


[snip]

Currently, you can't remove a vdev without destroying the pool.

--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Remove disk from ZFS Pool

2009-08-05 Thread Ketan
How can we remove a disk from a zfs pool? I want to remove disk c0d3.

 zpool status datapool
   pool: datapool
  state: ONLINE
  scrub: none requested
 config:

         NAME        STATE     READ WRITE CKSUM
         datapool    ONLINE       0     0     0
           c0d2      ONLINE       0     0     0
           c0d3      ONLINE       0     0     0
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] remove disk (again)

2008-04-02 Thread Colby Gutierrez-Kraybill
I'll take this as a yes.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] remove disk (again)

2008-03-31 Thread Colby Gutierrez-Kraybill
I intended to add a disk as a hot spare to a zpool but inadvertently added it as
an equal partner of the entire pool, i.e.

zpool add ataarchive c1t1d0
instead of
zpool add ataarchive spare c1t1d0

This is a zpool on an X4500 with 4 raidz2's configured.  From my reading of 
previous threads, the ZFS FAQ and the wikipedia entry, it looks as though the 
only way I can remove the disk now is to destroy the zpool and then rebuild it. 
 Is this still the case until Nevada b77?

- Colby "never type anything at 4am without the benefit of stimulants" 
Gutierrez-Kraybill
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss