Re: [zfs-discuss] resizing zpools by growing LUN

2009-09-21 Thread Sascha
Hi Darren,

Sorry that it took so long before I could answer.

The good thing:
I found out what went wrong.

What I did:
After resizing a disk on the storage, Solaris recognizes it immediately.
Every time you resize a disk, the EVA storage updates the description, which
contains the size, so echo | format promptly shows the new description.
So that part seemed to work fine.

But it turned out that format doesn't automatically update the number of
cylinders/sectors, so I tried to auto configure the disk.
When I did that, the first sector of partition 0 changed from sector #34 to
#256 (which makes labels 0 and 1 of the zpool inaccessible).
The last sector changed to the new end of the disk (which makes labels 2 and 3
of the zpool inaccessible).
Partition 8 changed accordingly and correctly.
If I then relabel the disk, the zpool is destroyed:
labels 0 and 1 of the zpool are gone and, obviously, labels 2 and 3 too.
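
(In case it helps anyone else, this is the check I use now; just a sketch with
the device name from my setup, and the exact zdb wording may differ between
builds:)

  # Where does slice 0 currently start and end?
  prtvtoc /dev/rdsk/c6t6001438002A5435A0001006Dd0s0

  # Can ZFS still find its four labels on that slice? If the slice start
  # moved from 34 to 256, labels 0 and 1 are no longer found; if the end
  # moved, the same happens to labels 2 and 3.
  zdb -l /dev/dsk/c6t6001438002A5435A0001006Dd0s0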

What I did to solve it:
1. Changed the size of the disk/LUN on the storage
2. Noted the position of the starting sector of partition 0
3. Did an auto configure
4. Changed the starting sector of partition 0 back to the starting sector from
step #2
5. Labeled the disk

I was then able to import the zpool.
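
(As commands, the sequence that worked for me looks roughly like this; only a
sketch, using the disk and pool names from this thread, and with the actual
resize done on the EVA side first:)

  zpool export huhctmppool
  prtvtoc /dev/rdsk/c6t6001438002A5435A0001006Dd0s0   # note the first sector of slice 0 (34 here)
  format -e c6t6001438002A5435A0001006Dd0
    # type      -> 0 (Auto configure), so format picks up the new size
    # partition -> 0, set the starting sector back to the value noted above
    # label     -> keep the EFI label
  zpool import huhctmppool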

Do you know why the auto configure changes the starting sector?
What I also found out is that it doesn't happen every time:
sometimes the first sector of partition 0 changes, sometimes not.
So far I can't find any correlation as to when and why.

Sascha


Re: [zfs-discuss] resizing zpools by growing LUN

2009-09-21 Thread Sascha
Hi Richard,

I think I'll update all our servers to the same version of ZFS...
That will hopefully make sure this doesn't happen again :-)

Darren and Richard: Thank you very much for your help!

Sascha


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-13 Thread Sascha
Hi Darren,

it took me a while to figure out which device is meant by zdb -l...

The original size was 20GB.
After resizing on the EVA, format -e showed the new, correct size:
  17. c6t6001438002A5435A0001006Dd0 HP-HSV210-6200-30.00GB
  /scsi_vhci/s...@g6001438002a5435a0001006d

Here is the output of zdb -l:
zdb -l /dev/dsk/c6t6001438002A5435A0001006Dd0s0

LABEL 0

version=10
name='huhctmppool'
state=0
txg=14
pool_guid=8571912358322557497
hostid=2230083084
hostname='huh014'
top_guid=3763080418932148585
guid=3763080418932148585
vdev_tree
type='disk'
id=0
guid=3763080418932148585
path='/dev/dsk/c6t6001438002A5435A0001006Dd0s0'
devid='id1,s...@n6001438002a5435a0001006d/a'
phys_path='/scsi_vhci/s...@g6001438002a5435a0001006d:a'
whole_disk=1
metaslab_array=14
metaslab_shift=27
ashift=9
asize=21461467136
is_log=0

LABEL 1

version=10
name='huhctmppool'
state=0
txg=14
pool_guid=8571912358322557497
hostid=2230083084
hostname='huh014'
top_guid=3763080418932148585
guid=3763080418932148585
vdev_tree
type='disk'
id=0
guid=3763080418932148585
path='/dev/dsk/c6t6001438002A5435A0001006Dd0s0'
devid='id1,s...@n6001438002a5435a0001006d/a'
phys_path='/scsi_vhci/s...@g6001438002a5435a0001006d:a'
whole_disk=1
metaslab_array=14
metaslab_shift=27
ashift=9
asize=21461467136
is_log=0

LABEL 2

version=10
name='huhctmppool'
state=0
txg=14
pool_guid=8571912358322557497
hostid=2230083084
hostname='huh014'
top_guid=3763080418932148585
guid=3763080418932148585
vdev_tree
type='disk'
id=0
guid=3763080418932148585
path='/dev/dsk/c6t6001438002A5435A0001006Dd0s0'
devid='id1,s...@n6001438002a5435a0001006d/a'
phys_path='/scsi_vhci/s...@g6001438002a5435a0001006d:a'
whole_disk=1
metaslab_array=14
metaslab_shift=27
ashift=9
asize=21461467136
is_log=0

LABEL 3

version=10
name='huhctmppool'
state=0
txg=14
pool_guid=8571912358322557497
hostid=2230083084
hostname='huh014'
top_guid=3763080418932148585
guid=3763080418932148585
vdev_tree
type='disk'
id=0
guid=3763080418932148585
path='/dev/dsk/c6t6001438002A5435A0001006Dd0s0'
devid='id1,s...@n6001438002a5435a0001006d/a'
phys_path='/scsi_vhci/s...@g6001438002a5435a0001006d:a'
whole_disk=1
metaslab_array=14
metaslab_shift=27
ashift=9
asize=21461467136
is_log=0
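
(Side note, my own arithmetic rather than anything zdb prints: the asize in
all four labels still corresponds to the old size,

  21461467136 bytes / 1024^3 = ~19.99 GiB,

i.e. the labels still describe the original ~20GB device, not the resized
30GB LUN.)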

Is that the output you meant?

Sascha


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-12 Thread Sascha
Hi Darren,

thanks for your quick answer.

 On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha wrote:
  Then creating a zpool:
  zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0
  
  zpool list
  NAME          SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
  huhctmppool  59.5G   103K  59.5G    0%  ONLINE  -
 I recall your original question was about a larger disk (1TB).

It wasn't me, I just have the same problem... ;-)

 My assumption is that the zpool create has created an EFI label here, but
 it would be nice to confirm with 'format' or 'prtvtoc' output.

Confirmed, it's really an EFI label (see below):

   format label
   [0] SMI Label
   [1] EFI Label
   Specify Label type[1]: 0
   Warning: This disk has an EFI label. Changing to SMI label will erase all
   current partitions.

BTW: Is there a smarter way to find out which label is in place?
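
(Partly answering my own question, from observation rather than documentation,
so take it as a guess: prtvtoc seems to be a non-destructive check, e.g.

  prtvtoc /dev/rdsk/c6t6001438002A5435A0001005Ad0s0

An EFI-labelled disk shows a sector-based partition map with a small reserved
slice 8 at the end, while an SMI/VTOC label shows cylinder geometry and the
classic backup slice 2 covering the whole disk.)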

 [skipping ahead]
 
  format type
  
 My fear is that with a small disk and not running 'format -e' it's placed
 an SMI label on the disk.  Again, confirming that would be nice.
 The label in this section must match the label in the first section.  If
 they're both EFI, you should be good.  If it put an SMI label on, the data
 won't line up.  If that's all that's wrong, you can repeat this step on
 your lun and should still be good.
 
 (use 'format -e' and when labeling it should prompt you for the label
 type).
 
  And in the end trying to import:
  zpool import huhctmppool
 
 Hmm, you've skipped the partitioning step (which should be done after
 applying the new label).

Yes, I skipped it, because I had no way to select the whole size.
There were only partitions 0 and 8.
Even if I select 0, delete the slice and recreate it, it ends up with the same size.
Also, the number of disk sectors did not change, and slice 8 still has its
starting sector at the old end of the disk (see below).

  format label
  [0] SMI Label
  [1] EFI Label
  Specify Label type[1]:
  Ready to label disk, continue? yes
  
 partition p
 Current partition table (original):
 Total disk sectors available: 146784222 + 16384 (reserved sectors)

 Part        Tag   Flag    First Sector       Size       Last Sector
   0         usr    wm              256    69.99GB         146784222
   1  unassigned    wm                0          0                 0
   2  unassigned    wm                0          0                 0
   3  unassigned    wm                0          0                 0
   4  unassigned    wm                0          0                 0
   5  unassigned    wm                0          0                 0
   6  unassigned    wm                0          0                 0
   7  unassigned    wm                0          0                 0
   8    reserved    wm        146784223     8.00MB         146800606



 I would make sure that slice 0 encompasses the full 70G.  Basically, make
 the partitioning here look like it used to, but with more data in the slice.

I would like to, but how?
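
(For reference, this is roughly what I do in the partition menu when I try;
the prompts are written down from memory and the sector numbers are the ones
from the table above, so treat it as a sketch:)

  format -e c6t6001438002A5435A0001005Ad0
  format> partition
  partition> 0
  Enter partition id tag[usr]:
  Enter partition permission flags[wm]:
  Enter new starting Sector[256]: 256
  Enter partition size[...]: 146783967b     (everything up to the reserved slice 8)
  partition> label                          (and keep the EFI label when asked)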

 So
 #1.  Confirm you have EFI labels before and after running format

OK, I can confirm that.

 #2.  Partition the disk after the label to look like it used to (but
      with a larger slice)

Well, that doesn't seem to work :-(

Sascha

 Darren


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-12 Thread Sascha
Darren, I want to give you a short overview of what I tried:

1.
created a zpool on a LUN
resized the LUN on the EVA
exported the zpool
used format -e and label
tried to enlarge slice 0 - impossible  (see posting above)

2. 
Same as 1., but exported the zpool before resizing on the EVA.
Same result...

3.
Created a zpool on a LUN
exported the zpool
resized the LUN on the EVA
relabeled the LUN with an EFI label
did an auto configure
The partition tool showed the correct size and number of sectors
Unfortunately, I could not import the zpool (no zpool defined)

4. 
Tried several variations of 3., but always ended up with no zpool during import.

Sascha


Re: [zfs-discuss] resizing zpools by growing LUN

2009-08-11 Thread Sascha
Hi Darren,

I tried exactly the same, but it doesn't seem to work.

First the size of the disk:
echo |format -e|grep -i 05A
  17. c6t6001438002A5435A0001005Ad0 HP-HSV210-6200-60.00GB
      /scsi_vhci/s...@g6001438002a5435a0001005a

Then creating a zpool:
zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0

zpool list
NAME          SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
huhctmppool  59.5G   103K  59.5G    0%  ONLINE  -

Exporting the zpool:
zpool export huhctmppool

Then a resize on the EVA, which gives 70GB instead of 60GB:
echo |format -e|grep -i 05A
  17. c6t6001438002A5435A0001005Ad0 HP-HSV210-6200-70.00GB
      /scsi_vhci/s...@g6001438002a5435a0001005a

Now labeling and doing the auto configuration:
format type


AVAILABLE DRIVE TYPES:
0. Auto configure
1. other
Specify disk type (enter its number)[1]: 0
c6t6001438002A5435A0001005Ad0: configured with capacity of 70.00GB
HP-HSV210-6200-70.00GB
selecting c6t6001438002A5435A0001005Ad0
[disk formatted]

And in the end trying to import:
zpool import huhctmppool
cannot import 'huhctmppool': no such pool available

Anything else I can try?
BTW: ZFS is version 10, Solaris 5/08.
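
(One thing I haven't tried yet, noting it here although I'm not sure it helps:
running zpool import without a pool name, which scans the devices and lists any
pools it can find, or pointing it at the device directory explicitly:

  zpool import               # list whatever pools the device scan finds
  zpool import -d /dev/dsk   # same, but scan this directory explicitly
)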

Thanks,

Sascha


Re: [zfs-discuss] poor zfs performance on my home server

2007-02-13 Thread Sascha Brechenmacher


On 13.02.2007 at 22:46, Ian Collins wrote:


[EMAIL PROTECTED] wrote:


Hello,

I switched my home server from Debian to Solaris. The main reasons for
this step were stability and ZFS.
But after the migration (why isn't it possible to mount a Linux fs on
Solaris???) I ran a few benchmarks, and now I'm thinking about switching
back to Debian. First, the hardware layout of my home server:

Mainboard: Asus A7V8X-X
CPU: AthlonXP 2400+
Memory: 1.5GB
Harddisks: 1x160GB (IDE, c0d1), 2x250GB (IDE, c1d0 + c1d1), 4x250GB
(SATA-1, c2d0,c2d1,c3d0,c3d1)
SATA Controller: SIL3114 (downgraded to the IDE-FW)
Solaris nv_54

Then I compiled the newest version of bonnie++ and ran some benchmarks,
first on a ZFS mirror (/data/) created with the 250GB IDE disks:

$ ./bonnie++ -d /data/ -s 4G -u root
Using uid:0, gid:0.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 4G 17832  25 17013  33  4630  12 21778  38 26839  11  66.0   2

Looks like poor hardware, how was the pool built?  Did you give ZFS
the entire drive?

On my nForce4 Athlon64 box with two 250G SATA drives,

zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bester           4G 45036  21 47972   8 32570   5 83134  80 97646  12 253.9   0

dd from the mirror gives about 77MB/s

Ian.



I use the entire drive for the zpools:

  pool: data
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0

errors: No known data errors

  pool: srv
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        srv         ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c3d1    ONLINE       0     0     0

How can I dd from the zpools, and where is the block device?
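
(What I'll probably try in the meantime, unless there is a better way; this is
just my guess at the right approach, the file name is made up and /data is
where bonnie++ ran above:

  # raw read from one mirror member; the whole-disk vdevs get an EFI label,
  # so the raw device is slice 0 of the disk:
  dd if=/dev/rdsk/c1d0s0 of=/dev/null bs=1024k count=1000

  # filesystem-level write and read back; make the file larger than RAM
  # (1.5GB here) so the read-back isn't served from the cache:
  dd if=/dev/zero of=/data/ddtest bs=1024k count=4096
  dd if=/data/ddtest of=/dev/null bs=1024k
)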

sascha