[zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread P-O Yliniemi

Hello,

I'm currently replacing a temporary storage server (server1) with the 
one that should be the final one (server2). To keep the data storage 
from the old one I'm attempting to import it on the new server. Both 
servers are running OpenIndiana server build 151a.


Server 1 (old)
The zpool consists of three disks in a raidz1 configuration:
# zpool status
  pool: storage
 state: ONLINE
  scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    c4d0    ONLINE       0     0     0
    c4d1    ONLINE       0     0     0
    c5d0    ONLINE       0     0     0

errors: No known data errors

Output of format command gives:
# format
AVAILABLE DISK SELECTIONS:
   0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
      /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   4. c5d1 <ST3000DM- W1F07HZ-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0

(c5d1 was previously used as a hot spare, but I removed it in an attempt 
to export and import the zpool without the spare)
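
A hot spare is normally detached with zpool remove; a minimal sketch of that 
step, assuming the spare was still attached to the pool as c5d1 (afterwards 
zpool status should no longer show a spares section):

# zpool remove storage c5d1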


# zpool export storage

# zpool list
(shows only rpool)

# zpool import
   pool: storage
 id: 17210091810759984780
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

storage ONLINE
  raidz1-0  ONLINE
c4d0ONLINE
c4d1ONLINE
c5d0ONLINE

(This was a check to see that the pool is still importable on the old 
server; it has also been verified in practice, since I moved the disks 
back to the old server yesterday to have the pool available during the night.)


zdb -l output in attached files.

---

Server 2 (new)
I have attached the disks to the new server in the same order (which 
shouldn't matter, as ZFS should locate the disks anyway).

zpool import gives:

root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:

storage                    UNAVAIL  insufficient replicas
  raidz1-0                 UNAVAIL  corrupted data
    c7t5000C50044E0F316d0  ONLINE
    c7t5000C50044A30193d0  ONLINE
    c7t5000C50044760F6Ed0  ONLINE

The problem is that all the disks are there and online, but the pool is 
showing up as unavailable.


Any ideas on what more I can do to solve this problem?

Regards,
  PeO



# zdb -l c4d0s0

LABEL 0

version: 28
name: 'storage'
state: 0
txg: 2450439
pool_guid: 17210091810759984780
hostid: 13183520
hostname: 'backup'
top_guid: 11913540592052933027
guid: 14478395923793210190
vdev_children: 1
vdev_tree:
    type: 'raidz'
    id: 0
    guid: 11913540592052933027
    nparity: 1
    metaslab_array: 31
    metaslab_shift: 36
    ashift: 9
    asize: 9001731096576
    is_log: 0
    create_txg: 4
    children[0]:
        type: 'disk'
        id: 0
        guid: 14478395923793210190
        path: '/dev/dsk/c4d0s0'
        devid: 'id1,cmdk@AST3000DM001-9YN166=W1F07HW4/a'
        phys_path: '/pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0:a'
        whole_disk: 1
        create_txg: 4
    children[1]:
        type: 'disk'
        id: 1
        guid: 9273576080530492359
        path: '/dev/dsk/c4d1s0'
        devid: 'id1,cmdk@AST3000DM001-9YN166=W1F05H2Y/a'
        phys_path: '/pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0:a'
        whole_disk: 1
        create_txg: 4
    children[2]:
        type: 'disk'
        id: 2
        guid: 6205751126661365015
        path: '/dev/dsk/c5d0s0'
        devid: 'id1,cmdk@AST3000DM001-9YN166=W1F032RJ/a'
        phys_path: '/pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0:a'
        whole_disk: 1
        create_txg: 4

LABEL 1

version: 28
name: 'storage'
state: 0
txg: 2450439
pool_guid: 17210091810759984780
hostid: 13183520
hostname: 'backup'
top_guid: 11913540592052933027
guid: 14478395923793210190
vdev_children: 1
vdev_tree:
    type: 'raidz'
    id: 0
    guid: 11913540592052933027
    nparity: 1
    metaslab_array: 31
    metaslab_shift: 36
    ashift: 9
    asize: 9001731096576
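
When a pool shows up as UNAVAIL even though every member disk is ONLINE, a 
few import variants are commonly tried before digging deeper. A hedged 
sketch, using the pool name from above (none of these is guaranteed to 
succeed, and -F discards the most recent transactions if it does):

# zpool import -d /dev/dsk storage    (scan an explicit device directory)
# zpool import -f storage             (force the import if the pool looks in use)
# zpool import -nF storage            (dry run of the recovery-mode import)

These options should all be present in the zpool(1M) shipped with 
OpenIndiana 151a.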

Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
are the disk/SAS controllers the same on both servers?
-LT

Sent from my iPad

On Mar 13, 2012, at 6:10, P-O Yliniemi p...@bsd-guide.net wrote:

 Hello,
 
 I'm currently replacing a temporary storage server (server1) with the one 
 that should be the final one (server2). To keep the data storage from the old 
 one I'm attempting to import it on the new server. Both servers are running 
 OpenIndiana server build 151a.
 
 Server 1 (old)
 The zpool consists of three disks in a raidz1 configuration:
 # zpool status
  pool: storage
 state: ONLINE
  scan: none requested
 config:
 
NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    c4d0    ONLINE       0     0     0
    c4d1    ONLINE       0     0     0
    c5d0    ONLINE       0     0     0
 
 errors: No known data errors
 
 Output of format command gives:
 # format
 AVAILABLE DISK SELECTIONS:
   0. c2t1d0 LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126
  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 ST3000DM- W1F07HW-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 ST3000DM- W1F05H2-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 ST3000DM- W1F032R-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   4. c5d1 ST3000DM- W1F07HZ-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
 
 (c5d1 was previously used as a hot spare, but I removed it as an attempt to 
 export and import the zpool without the spare)
 
 # zpool export storage
 
 # zpool list
 (shows only rpool)
 
 # zpool import
   pool: storage
 id: 17210091810759984780
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:
 
storage ONLINE
  raidz1-0  ONLINE
c4d0ONLINE
c4d1ONLINE
c5d0ONLINE
 
 (check to see if it is importable to the old server, this has also been 
 verified since I moved back the disks to the old server yesterday to have it 
 available during the night)
 
 zdb -l output in attached files.
 
 ---
 
 Server 2 (new)
 I have attached the disks on the new server in the same order (which 
 shouldn't matter as ZFS should locate the disks anyway)
 zpool import gives:
 
 root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:
 
storage                    UNAVAIL  insufficient replicas
  raidz1-0                 UNAVAIL  corrupted data
    c7t5000C50044E0F316d0  ONLINE
    c7t5000C50044A30193d0  ONLINE
    c7t5000C50044760F6Ed0  ONLINE
 
 The problem is that all the disks are there and online, but the pool is 
 showing up as unavailable.
 
 Any ideas on what I can do more in order to solve this problem ?
 
 Regards,
  PeO
 
 
 
 zdb_l_c4d0s0.txt
 zdb_l_c4d1s0.txt
 zdb_l_c5d0s0.txt
 zdb_l_c5d1s0.txt
 zdb_l_c7t5000C50044A30193d0s0.txt
 zdb_l_c7t5000C50044E0F316d0s0.txt
 zdb_l_c7t5000C50044760F6Ed0s0.txt


Re: [zfs-discuss] Any recommendations on Perc H700 controller

2012-03-13 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Cooper Hubbell
 
 Regarding the writeback cache on disks, what is the recommended method
 to disable the cache?  Through HBA firmware, or via Solaris?

It's the same either way - you press Ctrl-R (or whatever) during bootup and 
disable the writeback in the PERC BIOS... Or you use MegaCLI, which does the 
same thing from within the OS.

But again, I would only recommend disabling the writeback if you either have a 
dedicated SSD log device (or something equivalent or faster), ... Or you 
disable the ZIL completely (the sync=disabled property).  Because if you 
actually use the ZIL and it sits on spindle disks, then the writeback cache is 
a big improvement over nothing.
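
For the MegaCLI route, the logical-drive write policy can be switched to 
write-through from the running OS. A rough sketch; the exact flags vary 
between MegaCLI releases, and -LAll/-aAll are placeholders for the specific 
logical drive and adapter:

# MegaCli -LDGetProp Cache -LAll -aAll    (show the current cache policy)
# MegaCli -LDSetProp WT -LAll -aAll       (set write-through)

Disabling the ZIL pool-wide, as mentioned above, is just a dataset property 
(the pool name here is a placeholder):

# zfs set sync=disabled tank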



Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Jim Klimov

2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:

hi
are the disk/SAS controllers the same on both servers?


Seemingly not. I don't see the output of format for Server2,
but for Server1 I can see that the 3TB disks are attached as IDE
devices (probably through the motherboard's SATA-to-IDE emulation?),
while on Server2 the addressing looks like SAS with WWN names.

It may be that on one controller the disks are used natively,
while on the other they are attached as a JBOD or as a set of
single-disk RAID0 volumes (so the controller's logic, or the
on-disk layout it expects, gets in the way), as recently
discussed on-list?
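
One way to test that from Server2 is to read the ZFS labels through the new 
controller and compare them with what Server1 reported; a sketch using the 
WWN-style device names from the failed import (the s0 slice is assumed, as 
in the zdb output above):

# zdb -l /dev/dsk/c7t5000C50044E0F316d0s0
# zpool import -d /dev/dsk

If the labels come back intact (same pool_guid, txg and asize as on Server1), 
the data is being presented unchanged; if they are missing or truncated, the 
controller is most likely adding or hiding something at the start or end of 
the disks.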


On Mar 13, 2012, at 6:10, P-O Yliniemi p...@bsd-guide.net wrote:


Hello,

I'm currently replacing a temporary storage server (server1) with the one that 
should be the final one (server2). To keep the data storage from the old one 
I'm attempting to import it on the new server. Both servers are running 
OpenIndiana server build 151a.

Server 1 (old)
The zpool consists of three disks in a raidz1 configuration:
# zpool status
    c4d0    ONLINE       0     0     0
    c4d1    ONLINE       0     0     0
    c5d0    ONLINE       0     0     0

errors: No known data errors

Output of format command gives:
# format
AVAILABLE DISK SELECTIONS:
   0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
      /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0



Server 2 (new)
I have attached the disks on the new server in the same order (which shouldn't 
matter as ZFS should locate the disks anyway)
zpool import gives:

root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

storage                    UNAVAIL  insufficient replicas
  raidz1-0                 UNAVAIL  corrupted data
    c7t5000C50044E0F316d0  ONLINE
    c7t5000C50044A30193d0  ONLINE
    c7t5000C50044760F6Ed0  ONLINE




Re: [zfs-discuss] Any recommendations on Perc H700 controller

2012-03-13 Thread Cooper Hubbell
My system is currently using LSI 9211 HBAs with Crucial M4 SSDs for
ZIL/L2ARC.  I have used LSIUTIL v1.63 to disable the write cache on the two
controllers with only SATA HDDs, though my third controller has a
combination of HDD and SSD as shown:

SAS2008's links are down, down, 6.0 G, 6.0 G, 3.0 G, down, 3.0 G, 3.0 G

  B___T___L  Type   Vendor   Product  Rev  PhyNum
  0   9   0  Disk   ATA  M4-CT064M4SSD2   0309   2
  0  10   0  Disk   ATA  M4-CT064M4SSD2   0309   3
  0  11   0  Disk   ATA  ST3500320NS  SN06   4
  0  12   0  Disk   ATA  ST3500320NS  SN06   6
  0  13   0  Disk   ATA  ST3500320NS  SN06   7


Unfortunately, the SATA write cache can only be turned on or off at the
controller level with LSIUTIL.  In a case where the SSD ZIL and L2ARC are
on the same controller as pool HDDs, is it still recommended that the cache
be disabled?  My other thought is that the cache to which LSIUTIL is
referring is located on the controller and that the write cache on the
individual disks is still active.  This excerpt shows the interactive
prompts of LSIUTIL during and after disabling the cache:

Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 14

 Multi-pathing:  [0=Disabled, 1=Enabled, default is 0]
 SATA Native Command Queuing:  [0=Disabled, 1=Enabled, default is 1]
 SATA Write Caching:  [0=Disabled, 1=Enabled, default is 1] 0

 Main menu, select an option:  [1-99 or e/p/w or 0 to quit] 68

 Current Port State
 --
 SAS2008's links are down, down, 6.0 G, 6.0 G, 3.0 G, down, 3.0 G, 3.0 G

 Software Version Information
 
 Current active firmware version is 0b00 (11.00.00)
 Firmware image's version is MPTFW-11.00.00.00-IT
   LSI Logic
   Not Packaged Yet
 x86 BIOS image's version is MPT2BIOS-7.21.00.00 (2011.08.11)

 Firmware Settings
 -
 SAS WWID:   stripped
 Multi-pathing:  Disabled
 SATA Native Command Queuing:Enabled
 SATA Write Caching: Disabled
 SATA Maximum Queue Depth:   32
 SAS Max Queue Depth, Narrow:0
 SAS Max Queue Depth, Wide:  0
 Device Missing Report Delay:0 seconds
 Device Missing I/O Delay:   0 seconds
 Phy Parameters for Phynum:  01234567
   Link Enabled: Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes
   Link Min Rate:1.5  1.5  1.5  1.5  1.5  1.5  1.5  1.5
   Link Max Rate:6.0  6.0  6.0  6.0  6.0  6.0  6.0  6.0
   SSP Initiator Enabled:Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes
   SSP Target Enabled:   No   No   No   No   No   No   No   No
   Port Configuration:   Auto Auto Auto Auto Auto Auto Auto Auto
 Interrupt Coalescing:   Enabled, timeout is 10 us, depth is 4
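
One way to tell whether the disks themselves still have their write cache 
enabled, independently of what LSIUTIL reports for the controller, is the 
expert mode of format on Solaris-derived systems; a sketch (the menu entries 
may differ between releases, and not every device exposes the cache pages):

# format -e
(select a disk, then:)
format> cache
cache> write_cache
write_cache> display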


Thank you!


On Tue, Mar 13, 2012 at 7:58 AM, Edward Ned Harvey 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Cooper Hubbell
 
  Regarding the writeback cache on disks, what is the recommended method
  to disable the cache?  Through HBA firmware, or via Solaris?

 It's the same either way - You press Ctrl-R (or whatever) during bootup
 and disable the writeback in the perc BIOS...  Or you use MegaCLI, which
 does the same thing from within the OS.

 But again, I would only recommend disabling the writeback if you have
 either a dedicated SSD log device (or equivalent or faster), ... Or if you
 disable the ZIL completely (sync=disabled property).  Because if you
 actually have ZIL and use spindle disks for ZIL, then the writeback cache
 is a big improvement over nothing.




Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread P-O Yliniemi

Jim Klimov wrote 2012-03-13 15:24:

2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:

hi
are the disk/SAS controllers the same on both servers?


Seemingly no. I don't see the output of format on Server2,
but for Server1 I see that the 3TB disks are used as IDE
devices (probably with motherboard SATA-IDE emulation?)
while on Server2 addressing goes like SAS with WWN names.


Correct, the servers are entirely different.
Server1 is an HP xw8400, and the disks are connected to the first four 
SATA ports (the xw8400 has both SAS and SATA ports, of which I use the 
SAS ports for the system disks).
On Server2, the disk controller used for the data disks is an LSI SAS 
9211-8i, updated to the latest IT-mode firmware (also tested with the 
original IR-mode firmware).


The output of the 'format' command on Server2 is:

AVAILABLE DISK SELECTIONS:
   0. c2t0d0 <ATA-OCZ-VERTEX3-2.11-55.90GB>
      /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@0,0
   1. c2t1d0 <ATA-OCZ-VERTEX3-2.11-55.90GB>
      /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@1,0
   2. c3d1 <Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63>
      /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c4d0 <Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63>
      /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
   4. c7t5000C5003F45CCF4d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
      /scsi_vhci/disk@g5000c5003f45ccf4
   5. c7t5000C50044E0F0C6d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
      /scsi_vhci/disk@g5000c50044e0f0c6
   6. c7t5000C50044E0F611d0 <ATA-ST3000DM001-9YN1-CC46-2.73TB>
      /scsi_vhci/disk@g5000c50044e0f611

Note that this is what it looks like now, not at the time I sent the 
question. The difference is that I have set up three other disks (items 
4-6) on the new server, and am currently transferring the contents from 
Server1 to this one using zfs send/receive.
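
A migration like that is typically a recursive snapshot piped into zfs 
receive; a minimal sketch with illustrative snapshot and target pool names 
(the actual dataset layout is not shown in this thread):

# zfs snapshot -r storage@migrate
# zfs send -R storage@migrate | ssh backup zfs receive -Fdu storage2

Here -R replicates the whole dataset tree with its properties, while -F, -d 
and -u let the receive overwrite a freshly created target pool, recreate the 
dataset paths under it, and leave the received filesystems unmounted.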


I will probably be able to reconnect the original disks to Server2 
tomorrow, once the data has been transferred to the new disks (at which 
point the problem is 'solved'), in case there is anything else I can do 
to try to solve it the 'right' way.



It may be possible that on one controller disks are used
natively while on another they are attached as a JBOD
or a set of RAID0 disks (so the controller's logic or its
expected layout intervenes), as recently discussed on-list?

On the HP, on a reboot, I was reminded that the 3TB disks are displayed 
as 800-and-something GB by the BIOS (although they are correctly 
identified by OpenIndiana and ZFS). This could be part of the problem 
with exporting and importing the pool.



On Mar 13, 2012, at 6:10, P-O Yliniemi p...@bsd-guide.net wrote:


Hello,

I'm currently replacing a temporary storage server (server1) with 
the one that should be the final one (server2). To keep the data 
storage from the old one I'm attempting to import it on the new 
server. Both servers are running OpenIndiana server build 151a.


Server 1 (old)
The zpool consists of three disks in a raidz1 configuration:
# zpool status
    c4d0    ONLINE       0     0     0
    c4d1    ONLINE       0     0     0
    c5d0    ONLINE       0     0     0

errors: No known data errors

Output of format command gives:
# format
AVAILABLE DISK SELECTIONS:
   0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
      /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0



Server 2 (new)
I have attached the disks on the new server in the same order (which 
shouldn't matter as ZFS should locate the disks anyway)

zpool import gives:

root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

storage                    UNAVAIL  insufficient replicas
  raidz1-0                 UNAVAIL  corrupted data
    c7t5000C50044E0F316d0  ONLINE
    c7t5000C50044A30193d0  ONLINE
    c7t5000C50044760F6Ed0  ONLINE





Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Hung-sheng Tsao
IMHO, ZFS is smart, but not smart enough when you are dealing with two 
different controllers.


Sent from my iPhone

On Mar 13, 2012, at 3:32 PM, P-O Yliniemi p...@bsd-guide.net wrote:

 Jim Klimov wrote 2012-03-13 15:24:
 2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:
 hi
  are the disk/SAS controllers the same on both servers?
 
 Seemingly no. I don't see the output of format on Server2,
 but for Server1 I see that the 3TB disks are used as IDE
 devices (probably with motherboard SATA-IDE emulation?)
 while on Server2 addressing goes like SAS with WWN names.
 
 Correct, the servers are all different.
 Server1 is a HP xw8400, and the disks are connected to the first four SATA 
 ports (the xw8400 has both SAS and SATA ports, of which I use the SAS ports 
 for the system disks).
 On Server2, the disk controller used for the data disks is a LSI SAS 9211-8i, 
 updated with the latest IT-mode firmware (also tested with the original 
 IR-mode firmware)
 
 The output of the 'format' command on Server2 is:
 
 AVAILABLE DISK SELECTIONS:
   0. c2t0d0 ATA-OCZ-VERTEX3-2.11-55.90GB
  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@0,0
   1. c2t1d0 ATA-OCZ-VERTEX3-2.11-55.90GB
  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@1,0
   2. c3d1 Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c4d0 Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63
  /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
   4. c7t5000C5003F45CCF4d0 ATA-ST3000DM001-9YN1-CC46-2.73TB
  /scsi_vhci/disk@g5000c5003f45ccf4
   5. c7t5000C50044E0F0C6d0 ATA-ST3000DM001-9YN1-CC46-2.73TB
  /scsi_vhci/disk@g5000c50044e0f0c6
   6. c7t5000C50044E0F611d0 ATA-ST3000DM001-9YN1-CC46-2.73TB
  /scsi_vhci/disk@g5000c50044e0f611
 
 Note that this is what it looks like now, not at the time I sent the 
 question. The difference is that I have set up three other disks (items 4-6) 
 on the new server, and are currently transferring the contents from Server1 
 to this one using zfs send/receive.
 
 I will probably be able to reconnect the correct disks to the Server2 
 tomorrow when the data has been transferred to the new disks (problem 
 'solved' at that moment), if there is anything else that I can do to try to 
 solve it the 'right' way.
 
 It may be possible that on one controller disks are used
 natively while on another they are attached as a JBOD
 or a set of RAID0 disks (so the controller's logic or its
 expected layout intervenes), as recently discussed on-list?
 
 On the HP, on a reboot, I was reminded that the 3TB disks were displayed as 
 800GB-something by the BIOS (although correctly identified by OpenIndiana and 
 ZFS). This could be a part of the problem with the ability to export/import 
 the pool.
 
 On Mar 13, 2012, at 6:10, P-O Yliniemi p...@bsd-guide.net wrote:
 
 Hello,
 
 I'm currently replacing a temporary storage server (server1) with the one 
 that should be the final one (server2). To keep the data storage from the 
 old one I'm attempting to import it on the new server. Both servers are 
 running OpenIndiana server build 151a.
 
 Server 1 (old)
 The zpool consists of three disks in a raidz1 configuration:
 # zpool status
    c4d0    ONLINE       0     0     0
    c4d1    ONLINE       0     0     0
    c5d0    ONLINE       0     0     0
 
 errors: No known data errors
 
 Output of format command gives:
 # format
 AVAILABLE DISK SELECTIONS:
   0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
      /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
      /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
 
 Server 2 (new)
 I have attached the disks on the new server in the same order (which 
 shouldn't matter as ZFS should locate the disks anyway)
 zpool import gives:
 
 root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:
 
storage                    UNAVAIL  insufficient replicas
  raidz1-0                 UNAVAIL  corrupted data
    c7t5000C50044E0F316d0  ONLINE
    c7t5000C50044A30193d0  ONLINE
    c7t5000C50044760F6Ed0  ONLINE
 
 