Re: [zfs-discuss] Zpool resize

2011-04-04 Thread For@ll

On 01.04.2011 14:50, Richard Elling wrote:

On Apr 1, 2011, at 4:23 AM, For@ll wrote:


Hi,

The LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I changed 
the LUN size on the NetApp, and Solaris format sees the new value, but the zpool still 
shows the old value.
I tried zpool export and zpool import, but that didn't resolve my problem.

bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
      /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
   1. c2t1d0 <NETAPP-LUN-7340-22.00GB>
      /iscsi/d...@iqn.1992-08.com.netapp%3Asn.13510595203E9,0
Specify disk (enter its number): ^C
bash-3.00# zpool list
NAME   SIZE  ALLOC   FREE   CAP  HEALTH  ALTROOT
TEST  9,94G    93K  9,94G    0%  ONLINE  -



What can I do so that zpool shows the new value?


zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
  -- richard


I tried your suggestion, but it had no effect.





Re: [zfs-discuss] Zpool resize

2011-04-04 Thread eXeC001er
Try to export and then import your volume.

2011/4/4 For@ll for...@stalowka.info

 On 01.04.2011 14:50, Richard Elling wrote:

 On Apr 1, 2011, at 4:23 AM, For@ll wrote:

  Hi,

 The LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I
 changed the LUN size on the NetApp, and Solaris format sees the new value, but the zpool
 still shows the old value.
 I tried zpool export and zpool import, but that didn't resolve my problem.

 bash-3.00# format
 Searching for disks...done


 AVAILABLE DISK SELECTIONS:
    0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
       /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
    1. c2t1d0 <NETAPP-LUN-7340-22.00GB>
       /iscsi/d...@iqn.1992-08.com.netapp%3Asn.13510595203E9,0
 Specify disk (enter its number): ^C
 bash-3.00# zpool list
 NAME   SIZE  ALLOC   FREE   CAP  HEALTH  ALTROOT
 TEST  9,94G    93K  9,94G    0%  ONLINE  -



 What can I do so that zpool shows the new value?


 zpool set autoexpand=on TEST
 zpool set autoexpand=off TEST
  -- richard


 I tried your suggestion, but it had no effect.





Re: [zfs-discuss] Zpool resize

2011-04-04 Thread For@ll

On 04.04.2011 11:56, eXeC001er wrote:

Try to export and then import your volume.


I have already done this, but the result is the same: no effect.



Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Fajar A. Nugraha
On Mon, Apr 4, 2011 at 4:49 PM, For@ll for...@stalowka.info wrote:
 What can I do so that zpool shows the new value?

 zpool set autoexpand=on TEST
 zpool set autoexpand=off TEST
  -- richard

 I tried your suggestion, but it had no effect.

Did you modify the partition table?

IIRC, if you pass a whole DISK to zpool create, it creates a
partition/slice on it, with either an SMI label (the default for rpool) or an
EFI label (the default for other pools). When the disk size changes (like when
you change the LUN size on the storage side), you PROBABLY need to resize
the partition/slice as well.

When I tested with OpenIndiana b148, simply setting zpool set
autoexpand=on was enough (I tested with Xen, and an OpenIndiana reboot was
required). Again, you might need to both set autoexpand=on and
resize the partition/slice.

As a first step, try choosing c2t1d0 in format and see what the
size of its first slice is.
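
A rough sketch of that first check, using the pool and device names from the
original post (exact output depends on whether the label is SMI or EFI):

# Print the current slice layout and sizes on the LUN
# (use ...s2 instead of ...s0 if the disk has an SMI label).
prtvtoc /dev/rdsk/c2t1d0s0

# Ask ZFS to grow the vdev once the underlying device reports more space.
zpool set autoexpand=on TEST

# Compare the reported pool size before and after.
zpool list TEST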

-- 
Fajar


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread For@ll

On 04.04.2011 12:44, Fajar A. Nugraha wrote:

On Mon, Apr 4, 2011 at 4:49 PM, For@ll for...@stalowka.info wrote:

What can I do so that zpool shows the new value?


zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
  -- richard


I tried your suggestion, but it had no effect.


Did you modify the partition table?

IIRC, if you pass a whole DISK to zpool create, it creates a
partition/slice on it, with either an SMI label (the default for rpool) or an
EFI label (the default for other pools). When the disk size changes (like when
you change the LUN size on the storage side), you PROBABLY need to resize
the partition/slice as well.

When I tested with OpenIndiana b148, simply setting zpool set
autoexpand=on was enough (I tested with Xen, and an OpenIndiana reboot was
required). Again, you might need to both set autoexpand=on and
resize the partition/slice.

As a first step, try choosing c2t1d0 in format and see what the
size of its first slice is.



I ran format and changed the type to auto-configure, and now I see the new 
value if I choose partition -> print, but when I exit format and 
reboot, the old value remains. How can I write the new settings?




Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Roy Sigurd Karlsbakk
  I tried your suggestion, but it had no effect.
 
 Did you modify the partition table?
 
 IIRC, if you pass a whole DISK to zpool create, it creates a
 partition/slice on it, with either an SMI label (the default for rpool) or an
 EFI label (the default for other pools). When the disk size changes (like when
 you change the LUN size on the storage side), you PROBABLY need to resize
 the partition/slice as well.

zpool create won't create a partition or slice; it'll just use the whole drive 
unless you give it a partition or slice. The last time I expanded a pool, replacing 21 
2TB drives with 3TB ones, it took some 2-3 minutes before the pool started being 
expanded, and then one vdev at a time, so simply turning autoexpand on and then 
off may be a bit too quick. Leave it on for a while...
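
As a sketch of that wait-and-check approach (pool name taken from the original
post; the sleep interval is only illustrative):

# Enable expansion and confirm the property took effect.
zpool set autoexpand=on TEST
zpool get autoexpand TEST

# Give the pool a few minutes before judging the result.
sleep 300
zpool list TEST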

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Fajar A. Nugraha
On Mon, Apr 4, 2011 at 7:58 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
 IIRC, if you pass a whole DISK to zpool create, it creates a
 partition/slice on it, with either an SMI label (the default for rpool) or an
 EFI label (the default for other pools). When the disk size changes (like when
 you change the LUN size on the storage side), you PROBABLY need to resize
 the partition/slice as well.

 zpool create won't create a partition or slice; it'll just use the whole 
 drive unless you give it a partition or slice.

Do you have a reference backing that up? It creates an EFI label on my
system when I give it a whole disk.

There's even a warning in the ZFS Troubleshooting Guide regarding the root
pool: make sure you specify a bootable slice and not the whole disk,
because the latter may install an EFI label
(http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide).
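
As an illustration of that distinction (pool and device names are only examples;
do not run this against a disk that already holds data):

# Whole disk: ZFS writes an EFI label and places the pool data in slice 0.
zpool create datapool c2t1d0

# Root pool: give it an SMI-labelled slice, not the whole disk, since these
# releases cannot boot from an EFI-labelled disk.
zpool create rpool c1t0d0s0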

-- 
Fajar


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Fajar A. Nugraha
On Mon, Apr 4, 2011 at 6:48 PM, For@ll for...@stalowka.info wrote:
 When I tested with OpenIndiana b148, simply setting zpool set
 autoexpand=on was enough (I tested with Xen, and an OpenIndiana reboot was
 required). Again, you might need to both set autoexpand=on and
 resize the partition/slice.

 As a first step, try choosing c2t1d0 in format and see what the
 size of its first slice is.


 I ran format and changed the type to auto-configure, and now I see the new
 value if I choose partition -> print, but when I exit format and
 reboot, the old value remains. How can I write the new settings?

Be glad it DIDN'T write the settings :D

My suggestion to run format was to see whether the disk already has a
partition/slice on it. If it does, format should print the settings it
currently has (and give warnings about some slice being part of a zfs
pool). If it doesn't, then perhaps the disk doesn't have a partition
table.

Changing the partition table/slice to cover the new size is somewhat tricky,
and can easily lead to data loss if NOT done properly. Hopefully
someone else will be able to help you. If you don't know anything
about changing partitions, don't even attempt it.

So, did the disk originally have a partition/slice or not? If not, then
zpool set autoexpand=on should be enough.

If it does have partitions, you might want to learn how to resize
partitions/slices properly, or better yet try booting an
OpenIndiana/Solaris Express live CD (after setting autoexpand=on),
export and import the pool, and see if it recognizes the size change.
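
For the no-partition case, a minimal sketch of that sequence (pool name from
the original post):

# With autoexpand already on, re-importing the pool gives ZFS a chance to
# re-read the device size.
zpool set autoexpand=on TEST
zpool export TEST
zpool import TEST
zpool list TEST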

-- 
Fajar


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Roy Sigurd Karlsbakk
 The LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I
 changed the LUN size on the NetApp, and Solaris format sees the new value, but the zpool
 still shows the old value.
 I tried zpool export and zpool import, but that didn't resolve my
 problem.
 
 bash-3.00# format
 Searching for disks...done
 
 
 AVAILABLE DISK SELECTIONS:
 0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
    /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
 1. c2t1d0 <NETAPP-LUN-7340-22.00GB>
    /iscsi/d...@iqn.1992-08.com.netapp%3Asn.13510595203E9,0
 Specify disk (enter its number): ^C
 bash-3.00# zpool list
 NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
 TEST 9,94G 93K 9,94G 0% ONLINE -
 
 What can I do so that zpool shows the new value?

Enable autoexpand. If you created a partition or slice on the drive and put the 
zpool onto that partition or slice, you'll need to change that partition's 
size. If you just used the whole device, autoexpand should work automatically 
once enabled (or after a few minutes - see my earlier post). If you can paste 
the output of 'zpool status', that should show whether you're using the whole device 
or a slice.
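
For reference, a sketch of what to look for in that output (device names follow
the original post; the slice-suffixed form is only an example):

zpool status TEST
# A whole-device vdev is listed without a slice suffix:
#     TEST        ONLINE
#       c2t1d0    ONLINE
# A slice-backed vdev carries an sN suffix, and that slice must be grown
# before autoexpand can help:
#     TEST        ONLINE
#       c2t1d0s0  ONLINE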

Vennlige hilsener / Best regards

roy


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Albert

On 04.04.2011 12:44, Fajar A. Nugraha wrote:

On Mon, Apr 4, 2011 at 4:49 PM, For@ll for...@stalowka.info wrote:

What can I do so that zpool shows the new value?

zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
  -- richard

I tried your suggestion, but it had no effect.

Did you modify the partition table?

IIRC, if you pass a whole DISK to zpool create, it creates a
partition/slice on it, with either an SMI label (the default for rpool) or an
EFI label (the default for other pools). When the disk size changes (like when
you change the LUN size on the storage side), you PROBABLY need to resize
the partition/slice as well.

When I tested with OpenIndiana b148, simply setting zpool set
autoexpand=on was enough (I tested with Xen, and an OpenIndiana reboot was
required). Again, you might need to both set autoexpand=on and
resize the partition/slice.

As a first step, try choosing c2t1d0 in format and see what the
size of its first slice is.


Hi,

I ran format and changed the type to auto-configure, and now I see the new 
value if I choose partition -> print, but when I exit format and 
reboot, the old value remains. How can I write the new settings?





Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Roy Sigurd Karlsbakk
  IIRC, if you pass a whole DISK to zpool create, it creates a
  partition/slice on it, with either an SMI label (the default for rpool) or
  an EFI label (the default for other pools). When the disk size changes
  (like when you change the LUN size on the storage side), you PROBABLY
  need to resize the partition/slice as well.
 
  When I tested with OpenIndiana b148, simply setting zpool set
  autoexpand=on was enough (I tested with Xen, and an OpenIndiana reboot
  was required). Again, you might need to both set autoexpand=on and
  resize the partition/slice.
 
  As a first step, try choosing c2t1d0 in format and see what the
  size of its first slice is.
 
 Hi,
 
 I ran format and changed the type to auto-configure, and now I see the
 new value if I choose partition -> print, but when I exit format and
 reboot, the old value remains. How can I write the new settings?

Normally no change should be needed. Can you please paste the output from 
'zpool status'?
 
Vennlige hilsener / Best regards

roy


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Cindy Swearingen

Hi Albert,

I didn't notice that you are running the Solaris 10 9/10 release.

Although the autoexpand property is provided, the underlying driver
changes to support the LUN expansion are not available in this release.

I don't have the right storage to test, but a possible workaround is
to create another, larger LUN and replace the existing (smaller) LUN with 
it by using the zpool replace command. Then either set the
autoexpand property to on or use the following command:

# zpool online -e TEST LUN

The autoexpand features work as expected in the Oracle Solaris 11
release.
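
A minimal sketch of that workaround, assuming the new, larger LUN shows up as
c2t2d0 (a hypothetical device name) while the pool currently sits on c2t1d0:

# Replace the smaller LUN with the larger one and wait for the resilver.
zpool replace TEST c2t1d0 c2t2d0
zpool status TEST

# Then pick up the extra capacity, either via the property or explicitly.
zpool set autoexpand=on TEST
zpool online -e TEST c2t2d0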

Thanks,

Cindy



On 04/04/11 05:38, Albert wrote:

On 04.04.2011 12:44, Fajar A. Nugraha wrote:

On Mon, Apr 4, 2011 at 4:49 PM, For@ll for...@stalowka.info wrote:

What can I do so that zpool shows the new value?

zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
  -- richard

I tried your suggestion, but it had no effect.

Did you modify the partition table?

IIRC, if you pass a whole DISK to zpool create, it creates a
partition/slice on it, with either an SMI label (the default for rpool) or an
EFI label (the default for other pools). When the disk size changes (like when
you change the LUN size on the storage side), you PROBABLY need to resize
the partition/slice as well.

When I tested with OpenIndiana b148, simply setting zpool set
autoexpand=on was enough (I tested with Xen, and an OpenIndiana reboot was
required). Again, you might need to both set autoexpand=on and
resize the partition/slice.

As a first step, try choosing c2t1d0 in format and see what the
size of its first slice is.


Hi,

I ran format and changed the type to auto-configure, and now I see the new 
value if I choose partition -> print, but when I exit format and 
reboot, the old value remains. How can I write the new settings?





[zfs-discuss] 'cannot import 'andaman': I/O error', and failure to follow my own advice

2011-04-04 Thread Miles Nordin
I have a Solaris Express snv_130 box that imports a zpool from two
iSCSI targets, and after some power problems I cannot import the pool.

When I found the machine, the pool was FAULTED with half of most
mirrors showing CORRUPTED DATA and half showing UNAVAIL.  One of
the two iSCSI enclosures was on, while the other was off.  When I
brought the other iSCSI enclosure up, bringing all the devices in each
of the seven mirror vdevs online, the box panicked.

It went into a panic loop every time it tried to import the problem
pool at boot.  I disabled all the iSCSI targets that make up the
problem pool and brought the box up, then saved a copy of
/etc/zfs/zpool.cache and exported the UNAVAIL pool.  Then I turned the
host off, brought back all the iSCSI targets, and booted without a
crash, hoping I could 'zpool import' the problem pool.

(Another mirrored pool on the same pair of iSCSI enclosures came back
fine and scrubbed with no errors. shrug)

Here is what I get typing some basic commands:

-8-
terabithia:/# zpool import
  pool: andaman
id: 7400719929021713582
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

andaman  ONLINE
  mirror-0   ONLINE
c3t43d0  ONLINE
c3t48d0  ONLINE
  mirror-1   ONLINE
c3t45d0  ONLINE
c3t47d0  ONLINE
  mirror-2   ONLINE
c3t52d0  ONLINE
c3t59d0  ONLINE
  mirror-3   ONLINE
c3t46d0  ONLINE
c3t49d0  ONLINE
  mirror-4   ONLINE
c3t50d0  ONLINE
c3t44d0  ONLINE
  mirror-5   ONLINE
c3t57d0  ONLINE
c3t53d0  ONLINE
  mirror-6   ONLINE
c3t54d0  ONLINE
c3t51d0  ONLINE
terabithia:/# zpool import andaman
cannot import 'andaman': I/O error
Destroy and re-create the pool from
a backup source.
terabithia:/# zpool status
  pool: aboveground
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
aboveground  ONLINE   0 0 0
  mirror-0   ONLINE   0 0 0
c3t10d0  ONLINE   0 0 0
c3t16d0  ONLINE   0 0 0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0

errors: No known data errors
terabithia:/# zpool import -F andaman
cannot import 'andaman': I/O error
Destroy and re-create the pool from
a backup source.
terabithia:/# zdb -ve andaman

Configuration for import:
version: 22
pool_guid: 7400719929021713582
name: 'andaman'
state: 0
hostid: 2200768359
hostname: 'terabithia.th3h.inner.chaos'
vdev_children: 7
vdev_tree:
type: 'root'
id: 0
guid: 7400719929021713582
children[0]:
type: 'mirror'
id: 0
guid: 337393226491877361
whole_disk: 0
metaslab_array: 14
metaslab_shift: 33
ashift: 9
asize: 1000191557632
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 1781150413433362160
phys_path: 
'/iscsi/disk@iqn.2006-11.chaos.inner.th3h.fishstick%3Asd-andaman0001,0:a'
whole_disk: 1
DTL: 91
path: '/dev/dsk/c3t43d0s0'
devid: 
'id1,sd@t49455400020059100f00/a'
children[1]:
type: 'disk'
id: 1
guid: 7841235598547702997
phys_path: 
'/iscsi/disk@iqn.2006-11.chaos.inner.th3h%3Aoldfishstick%3Asd-andaman0001,1:a'
whole_disk: 1
DTL: 215
path: '/dev/dsk/c3t48d0s0'
devid: 
'id1,sd@t494554000200880e0f00/a'
children[1]:
type: 'mirror'
id: 1
guid: 1953060080997571723
whole_disk: 0
metaslab_array: 210
metaslab_shift: 33
ashift: 9
asize: 1000191557632
is_log: 0
children[0]:
type: 'disk'