Re: [zfs-discuss] resizing zpools by growing LUN

2009-09-21 Thread Sascha
Hi Darren,

sorry that it took so long before I could answer.

The good thing:
I found out what went wrong.

What I did:
After resizing a disk on the storage, Solaris recognizes it immediately.
Every time you resize a disk, the EVA storage updates the description, which
contains the size. So typing echo | format promptly shows the new description.
So that seemed to work fine.

But it turned out that format doesn't automagically update the number of
cylinders/sectors.
So I tried to auto configure the disk.
When I did that, the first sector of partition 0 changed from sector #34 to
#256 (which makes labels 0 and 1 of the zpool inaccessible).
The last sector changed to the new end of the disk (which makes labels 2 and 3
of the zpool inaccessible).
Partition 8 changed accordingly and correctly.
If I then relabel the disk, the zpool is destroyed:
labels 0 and 1 of the zpool are gone and, obviously, labels 2 and 3 too.

What I did to solve it:
1. Changed the size of a disk/LUN on the storage
2. Noted the starting sector of partition 0
3. Did an auto configure
4. Changed the starting sector of partition 0 back to the value noted in
step #2
5. Labeled the disk

I then was able to import the zpool.
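
The workaround above boils down to recording slice 0's first sector before the
auto configure and restoring it afterwards. A minimal sketch of the
record-and-restore idea; the table below is a made-up sample in prtvtoc's
column layout (partition, tag, flags, first sector, sector count, last
sector), so the parsing can be shown self-contained. On a live system you
would pipe in the real prtvtoc output for the disk instead:

```shell
#!/bin/sh
# Sketch: before running "Auto configure", record where slice 0 starts so
# it can be restored afterwards.  slice0_start reads prtvtoc-style columns
# and prints the first sector of partition 0.
slice0_start() {
    awk '$1 == 0 { print $4 }'
}

# Invented sample: old-style EFI layout with slice 0 starting at sector 34.
sample='  0      4    00         34  146784189  146784222
  8     11    00  146784223      16384  146800606'

start=$(printf '%s\n' "$sample" | slice0_start)
echo "slice 0 starts at sector $start"    # -> slice 0 starts at sector 34
```

After the auto configure, the partition menu in format can then be used to set
slice 0's first sector back to that recorded value before labeling.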

Do you know why the auto configure changes the starting sector?
What I also found out is that it doesn't happen every time.
Sometimes the first sector of partition 0 changes, sometimes not.
So far I can't find any correlation between when and why.

Sascha
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resizing zpools by growing LUN

2009-09-21 Thread Richard Elling

On Sep 21, 2009, at 8:59 AM, Sascha wrote:


> Hi Darren,
>
> sorry that it took so long before I could answer.
>
> The good thing:
> I found out what went wrong.
>
> What I did:
> After resizing a disk on the storage, Solaris recognizes it immediately.
> Every time you resize a disk, the EVA storage updates the description,
> which contains the size. So typing echo | format promptly shows the new
> description.
>
> So that seemed to work fine.
>
> But it turned out that format doesn't automagically update the number of
> cylinders/sectors.
>
> So I tried to auto configure the disk.
> When I did that, the first sector of partition 0 changed from sector #34
> to #256 (which makes labels 0 and 1 of the zpool inaccessible).


This was a change made long ago, but it finally caught up with you.
You must have created the original EFI label with an older version of
[Open]Solaris. If you then relabel with automatic settings, the starting
sector will change to the 256 value, which is a much better starting  
point.

 -- richard



Re: [zfs-discuss] resizing zpools by growing LUN

2009-09-21 Thread Sascha
Hi Richard,

I think I'll update all our servers to the same version of ZFS...
That will hopefully make sure this doesn't happen again :-)

Darren and Richard: thank you very much for your help!

Sascha


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-13 Thread Sascha
Hi Darren,

it took me a while to figure out which device is meant by zdb -l...

Original size was 20GB.
After resizing in the EVA, format -e showed the new correct size:
  17. c6t6001438002A5435A0001006Dd0 HP-HSV210-6200-30.00GB
      /scsi_vhci/s...@g6001438002a5435a0001006d

Here is the output of zdb -l:
zdb -l /dev/dsk/c6t6001438002A5435A0001006Dd0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=10
    name='huhctmppool'
    state=0
    txg=14
    pool_guid=8571912358322557497
    hostid=2230083084
    hostname='huh014'
    top_guid=3763080418932148585
    guid=3763080418932148585
    vdev_tree
        type='disk'
        id=0
        guid=3763080418932148585
        path='/dev/dsk/c6t6001438002A5435A0001006Dd0s0'
        devid='id1,s...@n6001438002a5435a0001006d/a'
        phys_path='/scsi_vhci/s...@g6001438002a5435a0001006d:a'
        whole_disk=1
        metaslab_array=14
        metaslab_shift=27
        ashift=9
        asize=21461467136
        is_log=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=10
    name='huhctmppool'
    state=0
    txg=14
    pool_guid=8571912358322557497
    hostid=2230083084
    hostname='huh014'
    top_guid=3763080418932148585
    guid=3763080418932148585
    vdev_tree
        type='disk'
        id=0
        guid=3763080418932148585
        path='/dev/dsk/c6t6001438002A5435A0001006Dd0s0'
        devid='id1,s...@n6001438002a5435a0001006d/a'
        phys_path='/scsi_vhci/s...@g6001438002a5435a0001006d:a'
        whole_disk=1
        metaslab_array=14
        metaslab_shift=27
        ashift=9
        asize=21461467136
        is_log=0
--------------------------------------------
LABEL 2
--------------------------------------------
    version=10
    name='huhctmppool'
    state=0
    txg=14
    pool_guid=8571912358322557497
    hostid=2230083084
    hostname='huh014'
    top_guid=3763080418932148585
    guid=3763080418932148585
    vdev_tree
        type='disk'
        id=0
        guid=3763080418932148585
        path='/dev/dsk/c6t6001438002A5435A0001006Dd0s0'
        devid='id1,s...@n6001438002a5435a0001006d/a'
        phys_path='/scsi_vhci/s...@g6001438002a5435a0001006d:a'
        whole_disk=1
        metaslab_array=14
        metaslab_shift=27
        ashift=9
        asize=21461467136
        is_log=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=10
    name='huhctmppool'
    state=0
    txg=14
    pool_guid=8571912358322557497
    hostid=2230083084
    hostname='huh014'
    top_guid=3763080418932148585
    guid=3763080418932148585
    vdev_tree
        type='disk'
        id=0
        guid=3763080418932148585
        path='/dev/dsk/c6t6001438002A5435A0001006Dd0s0'
        devid='id1,s...@n6001438002a5435a0001006d/a'
        phys_path='/scsi_vhci/s...@g6001438002a5435a0001006d:a'
        whole_disk=1
        metaslab_array=14
        metaslab_shift=27
        ashift=9
        asize=21461467136
        is_log=0

Is that the output you meant?

Sascha


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-12 Thread Sascha
Hi Darren,

thanks for your quick answer.

> On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha wrote:
> > Then creating a zpool:
> > zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0
> >
> > zpool list
> > NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> > huhctmppool  59.5G  103K  59.5G   0%  ONLINE  -
>
> I recall your original question was about a larger disk (1TB).

It wasn't me, I just have the same problem... ;-)

> My assumption is that the zpool create has created an EFI label here,
> but it would be nice to confirm with 'format' or 'prtvtoc' output.

Confirmed, it's really an EFI label (see below).

format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label. Changing to SMI label will erase all
current partitions.

BTW: Is there a smarter way to find out which label is in place?

> [skipping ahead]
>
> > format> type
>
> My fear is that with a small disk and not running 'format -e' it's
> placed an SMI label on the disk.  Again, confirming that would be nice.
> The label in this section must match the label in the first section.  If
> they're both EFI, you should be good.  If it put an SMI label on, the
> data won't line up.  If that's all that's wrong, you can repeat this
> step on your lun and should still be good.
>
> (use 'format -e' and when labeling it should prompt you for the label
> type).
>
> > And in the end trying to import:
> > zpool import huhctmppool
>
> Hmm, you've skipped the partitioning step (which should be done after
> applying the new label).

Yes, I skipped it, because I had no way to choose the whole size.
There were only partitions 0 and 8.
Even if I select 0, delete the slice, and newly create it, it has the same
size.
Also the number of disk sectors did not change, and slice 8 still has its
starting sector at the old end of the disk (see below).

format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]:
Ready to label disk, continue? yes

partition> p
Current partition table (original):
Total disk sectors available: 146784222 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector       Size       Last Sector
  0        usr    wm               256    69.99GB         146784222
  1 unassigned    wm                 0         0                  0
  2 unassigned    wm                 0         0                  0
  3 unassigned    wm                 0         0                  0
  4 unassigned    wm                 0         0                  0
  5 unassigned    wm                 0         0                  0
  6 unassigned    wm                 0         0                  0
  7 unassigned    wm                 0         0                  0
  8   reserved    wm         146784223     8.00MB         146800606



> I would make sure that slice 0 encompasses the full 70G.  Basically,
> make the partitioning here look like it used to, but with more data in
> the slice.

I would like to, but how?

> So
> #1.  Confirm you have EFI labels before and after running format

OK, I can confirm that.

> #2.  Partition the disk after the label to look like it used to (but
>      with a larger slice)

Well, that doesn't seem to work :-(

Sascha

> Darren


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-12 Thread Sascha
Darren, I want to give you a short overview of what I tried:

1.
created a zpool on a LUN
resized the LUN on the EVA
exported the zpool
used format -e and label
tried to enlarge slice 0 - impossible  (see posting above)

2. 
Same as 1., but exported the zpool before resizing on the EVA
Same result...

3.
Created a zpool on a LUN
exported the LUN
resized the LUN on the EVA
relabeled the LUN with an EFI label
did an auto configure
The partition tool showed the correct size and number of cylinders
Unfortunately, I could not import the zpool (no zpool defined)

4. 
Tried several variations of 3., but always ended up with no zpool during import.

Sascha


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-12 Thread A Darren Dunham
On Wed, Aug 12, 2009 at 04:53:20AM -0700, Sascha wrote:
> Confirmed, it's really an EFI label (see below).
>
> format> label
> [0] SMI Label
> [1] EFI Label
> Specify Label type[1]: 0
> Warning: This disk has an EFI label. Changing to SMI label will erase all
> current partitions.
>
> BTW: Is there a smarter way to find out which label is in place?

Take a look at partitions.  If you have 0-7 or 0-15, you have an SMI
label.  If you have 0-6 and 8 (7 is skipped), then you have an EFI
label.
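
That rule can be turned into a small check. This is a sketch under the
assumption that a prtvtoc-style listing only shows defined slices, so the
presence of slice 7 distinguishes the two label types; the sample table is
invented for illustration:

```shell
#!/bin/sh
# Heuristic from the rule above: SMI labels define slices 0-7 (or 0-15),
# while EFI labels define 0-6 plus 8 and skip 7.  So if a slice 7 appears
# in the partition listing, assume SMI; otherwise assume EFI.
label_type() {
    awk '$1 == 7 { smi = 1 } END { print (smi ? "SMI" : "EFI") }'
}

# Invented sample of an EFI-labeled disk: only slices 0 and 8 are defined.
efi_sample='  0      4    00        256  146783967  146784222
  8     11    00  146784223      16384  146800606'

printf '%s\n' "$efi_sample" | label_type    # -> EFI
```

On a live system the input would come from prtvtoc on the disk itself rather
than a captured sample.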

> > Hmm, you've skipped the partitioning step (which should be done after
> > applying the new label).
>
> Yes, I skipped it, because I had no way to choose the whole size.
> There were only partitions 0 and 8.
> Even if I select 0, delete the slice, and newly create it, it has the
> same size.
> Also the number of disk sectors did not change, and slice 8 still has
> its starting sector at the old end of the disk (see below).
>
> format> label
> [0] SMI Label
> [1] EFI Label
> Specify Label type[1]:
> Ready to label disk, continue? yes
>
> partition> p
> Current partition table (original):
> Total disk sectors available: 146784222 + 16384 (reserved sectors)
>
> Part  Tag  Flag   First Sector     Size  Last Sector
>   0   usr   wm             256  69.99GB    146784222

Okay, you had done that, you just hadn't confirmed that in the first
email.

Assuming the first sector is the same before and after, this should all
be correct.
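
As a sanity check of the numbers in the quoted table: slice 0 running from
sector 256 through 146784222 at 512 bytes per sector does work out to the
~70GB that format reports. A quick sketch of the arithmetic:

```shell
#!/bin/sh
# Slice 0 spans sectors 256 through 146784222 (inclusive); at 512 bytes
# per sector that is just under 70 GiB, matching the "69.99GB" format shows.
sectors=$((146784222 - 256 + 1))
bytes=$((sectors * 512))
gib=$((bytes / 1073741824))        # whole GiB (2^30 bytes)
echo "$sectors sectors = $gib GiB and change"
```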

Can you do zdb -l diskname after the resize?  Do you get any data?

-- 
Darren


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-11 Thread Sascha
Hi Darren,

I tried exactly the same, but it doesn't seem to work.

First the size of the disk:
[b]echo |format -e|grep -i 05A[/b]
  17. c6t6001438002A5435A0001005Ad0 HP-HSV210-6200-60.00GB
      /scsi_vhci/s...@g6001438002a5435a0001005a

Then creating a zpool:
[b]zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0[/b]

[b]zpool list[/b]
NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
huhctmppool  59.5G  103K  59.5G   0%  ONLINE  -

Exporting the zpool:
[b]zpool export huhctmppool[/b]

Then a resize on the EVA, which gives 70GB instead of 60GB:
[b]echo |format -e|grep -i 05A[/b]
  17. c6t6001438002A5435A0001005Ad0 HP-HSV210-6200-70.00GB
      /scsi_vhci/s...@g6001438002a5435a0001005a

Now labeling and doing the auto configuration:
format> type

AVAILABLE DRIVE TYPES:
        0. Auto configure
        1. other
Specify disk type (enter its number)[1]: 0
c6t6001438002A5435A0001005Ad0: configured with capacity of 70.00GB
HP-HSV210-6200-70.00GB
selecting c6t6001438002A5435A0001005Ad0
[disk formatted]

And in the end trying to import:
zpool import huhctmppool
cannot import 'huhctmppool': no such pool available

Anything else I can try ?
BTW: ZFS is Version 10, Solaris 5/08

Thanks,

Sascha


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-11 Thread A Darren Dunham
On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha wrote:
> Then creating a zpool:
> [b]zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0[/b]
>
> [b]zpool list[/b]
> NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> huhctmppool  59.5G  103K  59.5G   0%  ONLINE  -

I recall your original question was about a larger disk (1TB).  My
assumption is that the zpool create has created an EFI label here, but
it would be nice to confirm with 'format' or 'prtvtoc' output.

[skipping ahead]

> format> type
>
> AVAILABLE DRIVE TYPES:
>         0. Auto configure
>         1. other
> Specify disk type (enter its number)[1]: 0
> c6t6001438002A5435A0001005Ad0: configured with capacity of 70.00GB
> HP-HSV210-6200-70.00GB
> selecting c6t6001438002A5435A0001005Ad0
> [disk formatted]

My fear is that with a small disk and not running 'format -e' it's
placed an SMI label on the disk.  Again, confirming that would be nice.
The label in this section must match the label in the first section.  If
they're both EFI, you should be good.  If it put an SMI label on, the
data won't line up.  If that's all that's wrong, you can repeat this
step on your lun and should still be good.  

(use 'format -e' and when labeling it should prompt you for the label
type).

> And in the end trying to import:
> zpool import huhctmppool

Hmm, you've skipped the partitioning step (which should be done after
applying the new label).  I would make sure that slice 0 encompasses the
full 70G.  Basically, make the partitioning here look like it used to,
but with more data in the slice.

So
#1.  Confirm you have EFI labels before and after running format
#2.  Partition the disk after the label to look like it used to (but
 with a larger slice)

-- 
Darren


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-03 Thread Jan
Hi Darren,

thanks for your reply.

> What did you try?
> Since you're larger than 1T, you certainly have an EFI label. What you
> have to do is destroy the existing EFI label, then have format create a
> new one for the larger LUN. Finally, create slice 0 as the size of the
> entire (now larger) disk.

Yes, I have an EFI label on that device.
This is my procedure to try growing the capacity of the device:
- export the zpool
- overwrite the existing EFI label with format tool
- auto-configure it
- import the zpool

What do you mean with "then have format create a new one for the larger LUN.
Finally, create slice 0 as the size of the entire (now larger) disk"?

Could you please give me some more detailed information on your description?

Many thanks,

jan


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-03 Thread A Darren Dunham
On Mon, Aug 03, 2009 at 01:15:49PM -0700, Jan wrote:

> Yes, I have an EFI label on that device.
> This is my procedure to try growing the capacity of the device:
> - export the zpool
> - overwrite the existing EFI label with format tool
> - auto-configure it
> - import the zpool
>
> What do you mean with "then have format create a new one for the larger
> LUN. Finally, create slice 0 as the size of the entire (now larger)
> disk"?
>
> Could you please give me some more detailed information on your
> description?

I typed this up in a thread some months ago.  One of the posts has some
more detail.  

http://groups.google.com/group/comp.unix.solaris/browse_thread/thread/b3c996c76c3de860

Does that help?

-- 
Darren


Re: [zfs-discuss] resizing zpools by growing LUN

2009-07-30 Thread A Darren Dunham
On Wed, Jul 29, 2009 at 03:51:22AM -0700, Jan wrote:
> Hi all,
> I need to know if it is possible to expand the capacity of a zpool
> without loss of data by growing the LUN (2TB) presented from an HP EVA
> to a Solaris 10 host.

Yes.

> I know that there is a possible way in Solaris Express Community
> Edition, b117 with the autoexpand property. But I still work with
> Solaris 10 U7. Besides, when will this feature be integrated in
> Solaris 10?

Not sure.

> Is there a workaround? I have checked it out with the format tool -
> without effects.

What did you try?  

Since you're larger than 1T, you certainly have an EFI label.  What you
have to do is destroy the existing EFI label, then have format create a
new one for the larger LUN.  Finally, create slice 0 as the size of the
entire (now larger) disk.

There are four ZFS labels inside the EFI data slice.  Two at front, two
at end.  After enlarging, it probably won't be able to find the end two,
but it should import just fine (and will then write new labels at the
end).
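
The geometry behind this is worth spelling out. ZFS keeps four 256KB labels
per device: two at the start, and two positioned back from the end. The
sketch below computes where the labels sit before and after a grow (the LUN
sizes are illustrative); after growing, the old trailing copies are no longer
at end-512K and end-256K, which is why they appear "gone" until an import
rewrites them at the new end:

```shell
#!/bin/sh
# ZFS label placement: labels 0 and 1 sit at offsets 0 and 256K from the
# start of the device; labels 2 and 3 sit at (size - 512K) and (size - 256K).
# Growing the device moves the expected positions of labels 2 and 3,
# stranding the old copies in the middle of the disk.
label_offsets() {
    size=$1                         # device size in bytes
    k=$((256 * 1024))               # each label is 256KB
    echo "L0=0 L1=$k L2=$((size - 2 * k)) L3=$((size - k))"
}

old=$((60 * 1024 * 1024 * 1024))    # 60GB LUN (illustrative)
new=$((70 * 1024 * 1024 * 1024))    # grown to 70GB
label_offsets "$old"
label_offsets "$new"                # L2/L3 are now expected elsewhere
```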

As always, if you haven't done this before, you'll want to test it and
make a backup before trying on live data.

-- 
Darren


[zfs-discuss] resizing zpools by growing LUN

2009-07-29 Thread Jan
Hi all,
I need to know if it is possible to expand the capacity of a zpool without loss 
of data by growing the LUN (2TB) presented from an HP EVA to a Solaris 10 host.

I know that there is a possible way in Solaris Express Community Edition, b117 
with the autoexpand property. But I still work with Solaris 10 U7. Besides, 
when will this feature be integrated in Solaris 10?

Is there a workaround? I have tried it with the format tool, without success.

Thanks for any info.

Jan