Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-08-01 Thread Cindy Swearingen

Hi--

If the S10 patch is installed on this system...

Can you remind us whether you ran the zpool online -e command after the
LUN was expanded and the autoexpand property was set?

I hear that some storage doesn't generate the correct codes in response
to a LUN expansion, so you might need to run this command even if
autoexpand is set.
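
For example, a minimal sequence after the array has grown the LUN, using
the pool and device names from your earlier mail (adjust to your system):

# zpool set autoexpand=on xx-oraarch
# zpool online -e xx-oraarch c5t60060E800570B90070B96547d0
# zpool list xx-oraarch

zpool list should then report the new SIZE.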

Thanks,

Cindy



On 07/26/12 07:04, Habony, Zsolt wrote:

This is the bug I mentioned: SUNBUG:6430818 (Solaris Does Not Automatically
Handle an Increase in LUN Size).
Patch for that is: 148098-03

Its readme says:
Synopsis: Obsoleted by: 147440-15 SunOS 5.10: scsi patch

Looking at the current revision, 147440-21, there are references to the
incorporated patch and to its bug IDs as well.

(from 148098-03)

6228435 undecoded command in var/adm/messages - Error for Command: undecoded 
cmd 0x5a
6241086 format should allow label adjustment when disk/LUN size changes
6430818 Solaris needs mechanism of dynamically increasing LUN size


-Original Message-
From: Hung-Sheng Tsao (LaoTsao) Ph.D [mailto:laot...@gmail.com]
Sent: July 26, 2012 14:49
To: Habony, Zsolt
Cc: Cindy Swearingen; Sašo Kiselkov; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] online increase of zfs after LUN increase ?

IMHO, 147440-21 does not list the bugs solved by 148098- even though it
obsoletes 148098.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-08-01 Thread Habony, Zsolt
Hello,

I ran zpool online -e only.
I did not set the autoexpand property (it is not set by default).

(My understanding was that zpool online -e is the controlled way to expand,
where you decide when the expansion actually happens, and that setting
autoexpand=on is the non-controlled, fully automatic way.)
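
(For the automatic way, a sketch with my pool name would be:

# zpool set autoexpand=on xx-oraarch
# zpool get autoexpand xx-oraarch

while with the controlled way you leave autoexpand off and run
zpool online -e yourself when you are ready.)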

I have no detailed description of the bug, as I have no access to the
internal bug database, but it looked like the LUN size change was visible
to Solaris (format indeed showed the bigger size for me), while the VTOC
and partition sizes remained the old, smaller values. I would have had to
resize the partitions manually.

Zsolt


From: Cindy Swearingen [cindy.swearin...@oracle.com]
Sent: Wednesday, August 01, 2012 8:00 PM
To: Habony, Zsolt
Cc: Hung-Sheng Tsao (LaoTsao) Ph.D; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] online increase of zfs after LUN increase ?

Hi--

If the S10 patch is installed on this system...

Can you remind us whether you ran the zpool online -e command after the
LUN was expanded and the autoexpand property was set?

I hear that some storage doesn't generate the correct codes in response
to a LUN expansion, so you might need to run this command even if
autoexpand is set.

Thanks,

Cindy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-26 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
IMHO, 147440-21 does not list the bugs solved by 148098-
even though it obsoletes 148098.



Sent from my iPad

On Jul 25, 2012, at 18:14, Habony, Zsolt zsolt.hab...@hp.com wrote:

 Thank you for your replies.
 
 First, sorry for the misleading info. Patch 148098-03 is indeed not included
 in the recommended set, but trying to download it shows that 147440-15
 obsoletes it and that 147440-19 is included in the latest recommended patch
 set. Thus time has solved the problem by other means.
 
 Just for fun, my case was:
 
 A standard LUN used as a ZFS file system: no redundancy (the storage array
 already provides it) and no partitioning; the disk is given directly to zpool.
 # zpool status xx-oraarch
   pool: xx-oraarch
  state: ONLINE
   scan: none requested
 config:

         NAME                             STATE     READ WRITE CKSUM
         xx-oraarch                       ONLINE       0     0     0
           c5t60060E800570B90070B96547d0  ONLINE       0     0     0

 errors: No known data errors
 
 Partitioning shows this.  
 
 partition> pr
 Current partition table (original):
 Total disk sectors available: 41927902 + 16384 (reserved sectors)
 
 Part      Tag    Flag     First Sector        Size        Last Sector
   0        usr    wm                256     19.99GB           41927902
   1 unassigned    wm                  0          0                    0
   2 unassigned    wm                  0          0                    0
   3 unassigned    wm                  0          0                    0
   4 unassigned    wm                  0          0                    0
   5 unassigned    wm                  0          0                    0
   6 unassigned    wm                  0          0                    0
   8   reserved    wm           41927903      8.00MB           41944286
 
 
 As I mentioned, I did not partition it; zpool create did. I had absolutely
 no idea how to resize these partitions: where to get the available number
 of sectors, or how many should be skipped and reserved...
 Thus I backed up the 10 GB, destroyed the zpool, created the zpool again
 (the size was fine now), and restored the data.
 
 The partition looks like this now; I do not think I could have created it
 easily by hand.
 
 partition> pr
 Current partition table (original):
 Total disk sectors available: 209700062 + 16384 (reserved sectors)
 
 Part      Tag    Flag     First Sector        Size        Last Sector
   0        usr    wm                256     99.99GB          209700062
   1 unassigned    wm                  0          0                    0
   2 unassigned    wm                  0          0                    0
   3 unassigned    wm                  0          0                    0
   4 unassigned    wm                  0          0                    0
   5 unassigned    wm                  0          0                    0
   6 unassigned    wm                  0          0                    0
   8   reserved    wm          209700063      8.00MB          209716446
 
 Thank you for your help.
 Zsolt Habony
 
 
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-26 Thread Habony, Zsolt
This is the bug I mentioned: SUNBUG:6430818 (Solaris Does Not Automatically
Handle an Increase in LUN Size).
Patch for that is: 148098-03

Its readme says:
Synopsis: Obsoleted by: 147440-15 SunOS 5.10: scsi patch

Looking at the current revision, 147440-21, there are references to the
incorporated patch and to its bug IDs as well.

(from 148098-03)
 
6228435 undecoded command in var/adm/messages - Error for Command: undecoded 
cmd 0x5a
6241086 format should allow label adjustment when disk/LUN size changes
6430818 Solaris needs mechanism of dynamically increasing LUN size

-Original Message-
From: Hung-Sheng Tsao (LaoTsao) Ph.D [mailto:laot...@gmail.com] 
Sent: July 26, 2012 14:49
To: Habony, Zsolt
Cc: Cindy Swearingen; Sašo Kiselkov; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] online increase of zfs after LUN increase ?

IMHO, 147440-21 does not list the bugs solved by 148098- even though it
obsoletes 148098.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Habony, Zsolt
Hello,
There is a feature of ZFS (autoexpand, or zpool online -e) by which it can
consume an increased LUN immediately and grow the zpool size.
That would be a very useful (vital) feature in an enterprise environment.

Though when I tried to use it, it did not work. The LUN was expanded and is
visible in format, but the zpool did not grow.
I found a bug, SUNBUG:6430818 (Solaris Does Not Automatically Handle an
Increase in LUN Size).
Bad luck.

A patch exists, 148098, but it is _not_ part of the recommended patch set.
Thus my freshly installed Sol 10 U9 with the latest patch set still has the
problem. (Strange that this problem is not considered high impact...)

It mentions a workaround: zpool export, re-label the LUN using the
format(1M) command, zpool import.

Can you please help with that: what does that re-label mean?
(As I need to request downtime for the zone now, I would like to prepare
for what I will need to do.)

I have used the format utility thousands of times for organizing
partitions, but I have no idea how I would relabel a disk.
Also, I did not use format to label the disks; I gave the LUN to zpool
directly, and I would not dare to touch or resize any partition with the
format utility, not knowing what zpool wants to see there.

Have you experienced such a problem, and do you know how to grow a zpool
after a LUN increase?

Thank you in advance,
Zsolt Habony




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Sašo Kiselkov
On 07/25/2012 05:49 PM, Habony, Zsolt wrote:
 Hello,
   There is a feature of ZFS (autoexpand, or zpool online -e) by which it
 can consume an increased LUN immediately and grow the zpool size.
 That would be a very useful (vital) feature in an enterprise environment.
 
 Though when I tried to use it, it did not work. The LUN was expanded and
 is visible in format, but the zpool did not grow.
 I found a bug, SUNBUG:6430818 (Solaris Does Not Automatically Handle an
 Increase in LUN Size).
 Bad luck.
 
 A patch exists, 148098, but it is _not_ part of the recommended patch set.
 Thus my freshly installed Sol 10 U9 with the latest patch set still has
 the problem. (Strange that this problem is not considered high impact...)
 
 It mentions a workaround: zpool export, re-label the LUN using the
 format(1M) command, zpool import.
 
 Can you please help with that: what does that re-label mean?
 (As I need to request downtime for the zone now, I would like to prepare
 for what I will need to do.)
 
 I have used the format utility thousands of times for organizing
 partitions, but I have no idea how I would relabel a disk.
 Also, I did not use format to label the disks; I gave the LUN to zpool
 directly, and I would not dare to touch or resize any partition with the
 format utility, not knowing what zpool wants to see there.
 
 Have you experienced such a problem, and do you know how to grow a zpool
 after a LUN increase?

Relabel simply means running the label command in the format
utility after you've made changes to the slices. As long as you keep the
starting sector of a slice the same and don't shrink it, nothing bad
should happen.
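
As a rough sketch of that workaround (assuming an EFI-labeled whole-disk
vdev; the pool and device names below are placeholders):

# zpool export xx-oraarch
# format c5t60060E800570B90070B96547d0
format> type
        ...
        0. Auto configure
        ...
Specify disk type (enter its number): 0
format> label
Ready to label disk, continue? y
format> quit
# zpool import xx-oraarch

Auto configure makes format re-read the (now larger) capacity and build a
fresh default label, which the label command then writes to the disk.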

Are you doing this on a root pool?

Cheers,
--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Cindy Swearingen

Hi--

Patches are available to fix this, so I would suggest that you
request them from MOS support.

This fix fell through the cracks; we tried really hard to get it into
the current Solaris 10 release, but sometimes things don't work in your
favor. The patches are available, though.

Relabeling disks in a live pool is not a recommended practice,
so let's review other options. But first, some questions:

1. Is this a redundant pool?

2. Do you have an additional LUN (equivalent size) that you
could use as a spare?

What you could do is replace the existing LUN with a larger LUN,
if one is available. Then reattach the original LUN and detach
the spare LUN, but this depends on your pool configuration.
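
For example, a rough sketch with your single-disk pool (c5tNEWLUNd0 is
just a placeholder for the larger LUN):

# zpool replace xx-oraarch c5t60060E800570B90070B96547d0 c5tNEWLUNd0
# zpool status xx-oraarch

Wait for the resilver to finish before detaching or reusing the
original LUN.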

If requesting the patches is not possible and you don't have
a spare LUN, then please contact me directly. I might be able
to walk you through a more manual process.

Thanks,

Cindy


On 07/25/12 09:49, Habony, Zsolt wrote:

Hello,
There is a feature of ZFS (autoexpand, or zpool online -e) by which it can
consume an increased LUN immediately and grow the zpool size.
That would be a very useful (vital) feature in an enterprise environment.

Though when I tried to use it, it did not work. The LUN was expanded and is
visible in format, but the zpool did not grow.
I found a bug, SUNBUG:6430818 (Solaris Does Not Automatically Handle an
Increase in LUN Size).
Bad luck.

A patch exists, 148098, but it is _not_ part of the recommended patch set.
Thus my freshly installed Sol 10 U9 with the latest patch set still has the
problem. (Strange that this problem is not considered high impact...)

It mentions a workaround: zpool export, re-label the LUN using the
format(1M) command, zpool import.

Can you please help with that: what does that re-label mean?
(As I need to request downtime for the zone now, I would like to prepare
for what I will need to do.)

I have used the format utility thousands of times for organizing
partitions, but I have no idea how I would relabel a disk.
Also, I did not use format to label the disks; I gave the LUN to zpool
directly, and I would not dare to touch or resize any partition with the
format utility, not knowing what zpool wants to see there.

Have you experienced such a problem, and do you know how to grow a zpool
after a LUN increase?

Thank you in advance,
Zsolt Habony





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Habony, Zsolt
Thank you for your replies.

First, sorry for the misleading info. Patch 148098-03 is indeed not included
in the recommended set, but trying to download it shows that 147440-15
obsoletes it and that 147440-19 is included in the latest recommended patch
set. Thus time has solved the problem by other means.

Just for fun, my case was:

A standard LUN used as a ZFS file system: no redundancy (the storage array
already provides it) and no partitioning; the disk is given directly to zpool.
# zpool status xx-oraarch
  pool: xx-oraarch
 state: ONLINE
 scan: none requested
config:

        NAME                             STATE     READ WRITE CKSUM
        xx-oraarch                       ONLINE       0     0     0
          c5t60060E800570B90070B96547d0  ONLINE       0     0     0

errors: No known data errors

Partitioning shows this.  

partition> pr
Current partition table (original):
Total disk sectors available: 41927902 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                256     19.99GB           41927902
  1 unassigned    wm                  0          0                    0
  2 unassigned    wm                  0          0                    0
  3 unassigned    wm                  0          0                    0
  4 unassigned    wm                  0          0                    0
  5 unassigned    wm                  0          0                    0
  6 unassigned    wm                  0          0                    0
  8   reserved    wm           41927903      8.00MB           41944286


As I mentioned, I did not partition it; zpool create did. I had absolutely
no idea how to resize these partitions: where to get the available number
of sectors, or how many should be skipped and reserved...
Thus I backed up the 10 GB, destroyed the zpool, created the zpool again
(the size was fine now), and restored the data.
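
One way to do that backup/restore cycle, as a sketch only (the snapshot
name and the backup file location are illustrative):

# zfs snapshot -r xx-oraarch@mig
# zfs send -R xx-oraarch@mig > /var/tmp/oraarch.zstream
# zpool destroy xx-oraarch
# zpool create xx-oraarch c5t60060E800570B90070B96547d0
# zfs receive -Fd xx-oraarch < /var/tmp/oraarch.zstream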

The partition looks like this now; I do not think I could have created it
easily by hand.

partition> pr
Current partition table (original):
Total disk sectors available: 209700062 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                256     99.99GB          209700062
  1 unassigned    wm                  0          0                    0
  2 unassigned    wm                  0          0                    0
  3 unassigned    wm                  0          0                    0
  4 unassigned    wm                  0          0                    0
  5 unassigned    wm                  0          0                    0
  6 unassigned    wm                  0          0                    0
  8   reserved    wm          209700063      8.00MB          209716446

Thank you for your help.
Zsolt Habony



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Cindy Swearingen

Hi--

I guess I can't begin to understand patching.

Yes, you provided a whole disk to zpool create, but it actually
creates a part(ition) 0, as you can see in the output below.

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                256     19.99GB           41927902

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                256     99.99GB          209700062
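
In other words, something like this (a sketch; the device name is taken
from your earlier mail):

# zpool create xx-oraarch c5t60060E800570B90070B96547d0
# prtvtoc /dev/rdsk/c5t60060E800570B90070B96547d0s0

prtvtoc (or format's partition> print, as shown earlier in the thread)
displays the EFI label that zpool create wrote: the usr slice 0 plus the
small reserved slice 8.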

I'm sorry you had to recreate the pool. This *is* a must-have feature,
and it is working as designed in Solaris 11, and with patch 148098-03 (or
whatever the equivalent is) in Solaris 10 as well.

Maybe it's time for me to recheck this feature in the current Solaris 10
bits.

Thanks,

Cindy



On 07/25/12 16:14, Habony, Zsolt wrote:

Thank you for your replies.

First, sorry for the misleading info. Patch 148098-03 is indeed not included
in the recommended set, but trying to download it shows that 147440-15
obsoletes it and that 147440-19 is included in the latest recommended patch
set. Thus time has solved the problem by other means.

Just for fun, my case was:

A standard LUN used as a ZFS file system: no redundancy (the storage array
already provides it) and no partitioning; the disk is given directly to zpool.
# zpool status xx-oraarch
   pool: xx-oraarch
  state: ONLINE
  scan: none requested
config:

        NAME                             STATE     READ WRITE CKSUM
        xx-oraarch                       ONLINE       0     0     0
          c5t60060E800570B90070B96547d0  ONLINE       0     0     0

errors: No known data errors

Partitioning shows this.

partition> pr
Current partition table (original):
Total disk sectors available: 41927902 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                256     19.99GB           41927902
  1 unassigned    wm                  0          0                    0
  2 unassigned    wm                  0          0                    0
  3 unassigned    wm                  0          0                    0
  4 unassigned    wm                  0          0                    0
  5 unassigned    wm                  0          0                    0
  6 unassigned    wm                  0          0                    0
  8   reserved    wm           41927903      8.00MB           41944286


As I mentioned, I did not partition it; zpool create did. I had absolutely
no idea how to resize these partitions: where to get the available number
of sectors, or how many should be skipped and reserved...
Thus I backed up the 10 GB, destroyed the zpool, created the zpool again
(the size was fine now), and restored the data.

The partition looks like this now; I do not think I could have created it
easily by hand.

partition> pr
Current partition table (original):
Total disk sectors available: 209700062 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                256     99.99GB          209700062
  1 unassigned    wm                  0          0                    0
  2 unassigned    wm                  0          0                    0
  3 unassigned    wm                  0          0                    0
  4 unassigned    wm                  0          0                    0
  5 unassigned    wm                  0          0                    0
  6 unassigned    wm                  0          0                    0
  8   reserved    wm          209700063      8.00MB          209716446

Thank you for your help.
Zsolt Habony




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss