Re: [zfs-discuss] I screwed up my zpool

2007-12-04 Thread jonathan soons
Why didn't this command just fail?

# zpool add tank c4t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk

I did not use '-f' and yet my configuration was changed. That was unexpected 
behaviour.

Thanks for the advice tho, I will proceed with recreating the zpool.
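(For anyone hitting this later: zpool supports a dry run, which is a safer way to check what an add would do. A minimal sketch, using the device name from this thread:)

```shell
# Preview the layout the pool would have after the add, without
# actually modifying it; -n only prints the hypothetical config.
zpool add -n tank c4t0d0
```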
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] I screwed up my zpool

2007-12-03 Thread jonathan soons
revised indentation:

mirror2 / # zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
          c4t0d0    ONLINE       0     0     0

errors: No known data errors
mirror2 / #


[zfs-discuss] I screwed up my zpool

2007-12-03 Thread jonathan soons
mirror2 / # zpool history
History for 'tank':
2007-11-07.14:15:19 zpool create -f tank raidz2 c0t0d0 c0t1d0 c0t2d0 c2t0d0 
c2t1d0 c2t2d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0
2007-11-07.14:17:21 zfs set atime=off tank
2007-11-07.14:18:16 zfs create tank/datatel
2007-11-07.14:52:16 zfs set mountpoint=/datatel tank/datatel
2007-11-07.14:52:31 zfs create tank/u
2007-11-07.14:52:47 zfs set mountpoint=/u tank/u
2007-11-08.11:20:48 zpool scrub tank
2007-11-09.13:21:26 zpool online tank c3t0d0
2007-11-09.13:29:48 zpool replace tank c3t0d0
2007-12-02.18:07:40 zfs create tank/backup
2007-12-02.18:08:22 zfs set mountpoint=/backup tank/backup
2007-12-03.14:42:28 zpool add tank c4t0d0

mirror2 / #

I thought that c4t0d0 would be added to tank (raidz2). That is not what 
happened. tank is unaltered in df:
mirror2 / # df -k
Filesystem            kbytes    used    avail capacity  Mounted on
/dev/md/dsk/d0       7801199 5783703  1939485    75%    /
/devices                   0       0        0     0%    /devices
ctfs                       0       0        0     0%    /system/contract
proc                       0       0        0     0%    /proc
mnttab                     0       0        0     0%    /etc/mnttab
swap                   53800    1416    52384     3%    /etc/svc/volatile
objfs                      0       0        0     0%    /system/object
fd                         0       0        0     0%    /dev/fd
swap                   52384       0    52384     0%    /tmp
swap                   52424      40    52384     1%    /var/run
tank/datatel        86510592 50856620 21310585    71%    /datatel
tank                86510592       58 21310585     1%    /tank
tank/u              86510592  1561634 21310585     7%    /u
tank/backup         86510592 12778742 21310585    38%    /backup
mirror2 / #


but in zpool list, tank is slightly larger than before:

mirror2 / # zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                    101G   77.9G   22.9G    77%  ONLINE     -
mirror2 / #

and in zpool status:
mirror2 / # zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
          c4t0d0    ONLINE       0     0     0  # this line is indented to be under tank, not raidz2

errors: No known data errors
mirror2 / #

c4t0d0 is not part of raidz2. How can I fix this?
I cannot remove c4t0d0 and I cannot offline it.
Ideally I would like to create another zpool with c4t0d0 plus some more disks
since there are more than the recommended number of disks in tank
already.
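(A sketch of what that second pool could look like, once c4t0d0 is freed by destroying and recreating tank; every device name besides c4t0d0 is a placeholder:)

```shell
# Hypothetical: build a second raidz2 pool from c4t0d0 plus four
# additional disks (placeholder names). c4t0d0 must first be
# released, which currently means destroying and recreating tank.
zpool create tank2 raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0
```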

jonathan soons
 
 


Re: [zfs-discuss] I screwed up my zpool

2007-12-03 Thread Cindy . Swearingen

Jonathan,

Thanks for providing the zpool history output. :-)

You probably missed the message after this command:

# zpool add tank c4t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk

I provided some guidance on what you can do with a RAID-Z configuration,
here:

http://docs.sun.com/app/docs/doc/817-2271/6mhupg6i2?a=view#gaypw

Currently, you can't add disks to an existing RAID-Z vdev. You could
have added a whole new RAID-Z vdev instead, something like this:

# zpool add tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 ...

to create 2 top-level RAID-Z devices of 10 disks each, but this config
isn't recommended.

I don't think you can do anything to resolve your 8-disk RAID-Z config +
1 disk until zpool remove is implemented for this kind of removal,
except to backup your data and recreate the pool. You might take a
look at our BP site for RAID-Z config recommendations:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

I hope someone else has a better answer.

Cindy


jonathan soons wrote:
 revised indentation:
 
 mirror2 / # zpool status
   pool: tank
  state: ONLINE
  scrub: none requested
 config:
 
         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           raidz2    ONLINE       0     0     0
             c0t1d0  ONLINE       0     0     0
             c0t2d0  ONLINE       0     0     0
             c2t1d0  ONLINE       0     0     0
             c2t2d0  ONLINE       0     0     0
             c3t0d0  ONLINE       0     0     0
             c3t1d0  ONLINE       0     0     0
             c3t2d0  ONLINE       0     0     0
             c3t3d0  ONLINE       0     0     0
           c4t0d0    ONLINE       0     0     0
 
 errors: No known data errors
 mirror2 / #
  
  


Re: [zfs-discuss] I screwed up my zpool

2007-12-03 Thread Anton B. Rang
 2007-11-07.14:15:19 zpool create -f tank raidz2 [ ... ]
 2007-12-03.14:42:28 zpool add tank c4t0d0

 c4t0d0 is not part of raidz2. How can I fix this?

Back up your data; destroy the pool; and re-create it.

 Ideally I would like to create another zpool with c4t0d0 plus some more disks
 since there are more than the recommended number of disks in tank
 already.

This would be a convenient time to do that; though I think what you probably 
would want is a single zpool with two RAID-Z2 groups (vdevs).  (There isn't a 
recommended maximum number of disks for a pool, but there is for a single vdev.)
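A rough sketch of that procedure, assuming the backup target path and the
exact vdev split are placeholders (repeat the send/receive per dataset):

```shell
# Snapshot everything, copy it off-pool, then rebuild the pool as
# two top-level raidz2 vdevs. /safe is a hypothetical backup location.
zfs snapshot -r tank@migrate
zfs send tank/datatel@migrate > /safe/datatel.zsnap   # and likewise for u, backup
zpool destroy tank
zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c2t0d0 c2t1d0 \
    raidz2 c2t2d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0
zfs receive tank/datatel < /safe/datatel.zsnap
```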

Anton
 
 