Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Hung-sheng Tsao
IMHO
ZFS is smart, but not so smart when you deal with two different controllers.


Sent from my iPhone

On Mar 13, 2012, at 3:32 PM, P-O Yliniemi  wrote:

> Jim Klimov wrote on 2012-03-13 15:24:
>> 2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:
>>> hi
>>> are the disk/SAS controllers the same on both servers?
>> 
>> Seemingly not. I don't see the output of "format" for Server2,
>> but for Server1 I see that the 3TB disks are used as IDE
>> devices (probably with motherboard SATA-to-IDE emulation?),
>> while on Server2 the addressing looks like SAS with WWN names.
>> 
> Correct, the servers are all different.
> Server1 is an HP xw8400, and the disks are connected to the first four SATA 
> ports (the xw8400 has both SAS and SATA ports, of which I use the SAS ports 
> for the system disks).
> On Server2, the disk controller used for the data disks is an LSI SAS 9211-8i, 
> updated with the latest IT-mode firmware (also tested with the original 
> IR-mode firmware).
> 
> The output of the 'format' command on Server2 is:
> 
> AVAILABLE DISK SELECTIONS:
>   0. c2t0d0 
>  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@0,0
>   1. c2t1d0 
>  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@1,0
>   2. c3d1 
>  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
>   3. c4d0 
>  /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
>   4. c7t5000C5003F45CCF4d0 
>  /scsi_vhci/disk@g5000c5003f45ccf4
>   5. c7t5000C50044E0F0C6d0 
>  /scsi_vhci/disk@g5000c50044e0f0c6
>   6. c7t5000C50044E0F611d0 
>  /scsi_vhci/disk@g5000c50044e0f611
> 
> Note that this is what it looks like now, not at the time I sent the 
> question. The difference is that I have set up three other disks (items 4-6) 
> on the new server, and am currently transferring the contents from Server1 
> to this one using zfs send/receive.
> 
> I will probably be able to reconnect the correct disks to Server2 
> tomorrow, once the data has been transferred to the new disks (problem 
> 'solved' at that point), in case there is anything else I can do to try to 
> solve it the 'right' way.
> 
>> It may be possible that on one controller disks are used
>> "natively" while on another they are attached as a JBOD
>> or a set of RAID0 disks (so the controller's logic or its
>> expected layout intervenes), as recently discussed on-list?
>> 
> On the HP, on a reboot, I was reminded that the 3TB disks were displayed as 
> 800GB-something by the BIOS (although correctly identified by OpenIndiana and 
> ZFS). This could be part of the problem with exporting/importing the pool.
> 
>>> On Mar 13, 2012, at 6:10, P-O Yliniemi  wrote:
>>> 
 Hello,
 
 I'm currently replacing a temporary storage server (server1) with the one 
 that should be the final one (server2). To keep the data storage from the 
 old one I'm attempting to import it on the new server. Both servers are 
 running OpenIndiana server build 151a.
 
 Server 1 (old)
 The zpool consists of three disks in a raidz1 configuration:
 # zpool status
    c4d0    ONLINE       0     0     0
    c4d1    ONLINE       0     0     0
    c5d0    ONLINE       0     0     0
 
 errors: No known data errors
 
 Output of format command gives:
 # format
 AVAILABLE DISK SELECTIONS:
   0. c2t1d0
  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
>> 
 Server 2 (new)
 I have attached the disks on the new server in the same order (which 
 shouldn't matter as ZFS should locate the disks anyway)
 zpool import gives:
 
 root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:
 
storage                    UNAVAIL  insufficient replicas
  raidz1-0                 UNAVAIL  corrupted data
    c7t5000C50044E0F316d0  ONLINE
    c7t5000C50044A30193d0  ONLINE
    c7t5000C50044760F6Ed0  ONLINE
 
> 


Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread P-O Yliniemi

Jim Klimov wrote on 2012-03-13 15:24:

2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:

hi
are the disk/SAS controllers the same on both servers?


Seemingly not. I don't see the output of "format" for Server2,
but for Server1 I see that the 3TB disks are used as IDE
devices (probably with motherboard SATA-to-IDE emulation?),
while on Server2 the addressing looks like SAS with WWN names.


Correct, the servers are all different.
Server1 is an HP xw8400, and the disks are connected to the first four 
SATA ports (the xw8400 has both SAS and SATA ports, of which I use the 
SAS ports for the system disks).
On Server2, the disk controller used for the data disks is an LSI SAS 
9211-8i, updated with the latest IT-mode firmware (also tested with the 
original IR-mode firmware).
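
If it is of any use, which driver has claimed the HBA can be checked with 
something like the following (I would expect mpt_sas for a 9211-8i flashed 
to IT mode; the grep pattern is only there to trim the output):

# prtconf -D | grep -i mpt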


The output of the 'format' command on Server2 is:

AVAILABLE DISK SELECTIONS:
   0. c2t0d0 
  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@0,0
   1. c2t1d0 
  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@1,0
   2. c3d1 
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c4d0 
  /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
   4. c7t5000C5003F45CCF4d0 
  /scsi_vhci/disk@g5000c5003f45ccf4
   5. c7t5000C50044E0F0C6d0 
  /scsi_vhci/disk@g5000c50044e0f0c6
   6. c7t5000C50044E0F611d0 
  /scsi_vhci/disk@g5000c50044e0f611

Note that this is what it looks like now, not at the time I sent the 
question. The difference is that I have set up three other disks (items 
4-6) on the new server, and am currently transferring the contents from 
Server1 to this one using zfs send/receive.
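
For reference, the transfer itself is a plain recursive send/receive along 
these lines (the snapshot name and the target pool/host names here are 
placeholders rather than the exact ones I used):

# zfs snapshot -r storage@migrate
# zfs send -R storage@migrate | ssh backup "zfs receive -Fdu storage"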


I will probably be able to reconnect the correct disks to Server2 
tomorrow, once the data has been transferred to the new disks (problem 
'solved' at that point), in case there is anything else I can do to try 
to solve it the 'right' way.
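
In case it helps, these are the kinds of non-destructive checks I plan to 
run on Server2 once I reconnect the original disks tomorrow (the device 
name is one of those shown by 'zpool import' there, and the s0 slice path 
is an assumption on my part):

# zdb -l /dev/rdsk/c7t5000C50044E0F316d0s0
# zpool import -d /dev/dsk -o readonly=on storage

The first should print four labels with pool_guid 17210091810759984780 and 
matching top-level vdev guids; the second points the importer explicitly at 
the device directory and keeps the pool read-only while testing.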



It may be possible that on one controller disks are used
"natively" while on another they are attached as a JBOD
or a set of RAID0 disks (so the controller's logic or its
expected layout intervenes), as recently discussed on-list?

On the HP, on a reboot, I was reminded that the 3TB disks were displayed 
as 800GB-something by the BIOS (although correctly identified by 
OpenIndiana and ZFS). This could be part of the problem with 
exporting/importing the pool.
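
Doing the arithmetic, that 800GB-something figure looks like the usual 
32-bit LBA wrap at 2.2TB (assuming the ST3000DM001's nominal 5,860,533,168 
512-byte sectors, which is my assumption, not something I have verified):

# echo $((2**32 * 512))
2199023255552
# echo $(((5860533168 - 2**32) * 512))
801569726464

i.e. roughly 2.2TB for the limit and roughly 801GB left over after the 
wrap, which matches what the BIOS shows.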



On Mar 13, 2012, at 6:10, P-O Yliniemi  wrote:


Hello,

I'm currently replacing a temporary storage server (server1) with 
the one that should be the final one (server2). To keep the data 
storage from the old one I'm attempting to import it on the new 
server. Both servers are running OpenIndiana server build 151a.


Server 1 (old)
The zpool consists of three disks in a raidz1 configuration:
# zpool status
    c4d0    ONLINE       0     0     0
    c4d1    ONLINE       0     0     0
    c5d0    ONLINE       0     0     0

errors: No known data errors

Output of format command gives:
# format
AVAILABLE DISK SELECTIONS:
   0. c2t1d0
  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0

   1. c4d0
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0



Server 2 (new)
I have attached the disks on the new server in the same order (which 
shouldn't matter as ZFS should locate the disks anyway)

zpool import gives:

root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

storage                    UNAVAIL  insufficient replicas
  raidz1-0                 UNAVAIL  corrupted data
    c7t5000C50044E0F316d0  ONLINE
    c7t5000C50044A30193d0  ONLINE
    c7t5000C50044760F6Ed0  ONLINE





Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Jim Klimov

2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:

hi
are the disk/SAS controllers the same on both servers?


Seemingly not. I don't see the output of "format" for Server2,
but for Server1 I see that the 3TB disks are used as IDE
devices (probably with motherboard SATA-to-IDE emulation?),
while on Server2 the addressing looks like SAS with WWN names.

It may be possible that on one controller disks are used
"natively" while on another they are attached as a JBOD
or a set of RAID0 disks (so the controller's logic or its
expected layout intervenes), as recently discussed on-list?
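
One way to test that would be to compare what the OS itself reports for 
the disks on each server, for example (the device name is taken from your 
import listing below, and the s0 slice in the prtvtoc call is an 
assumption on my side):

# iostat -En
# prtvtoc /dev/rdsk/c7t5000C50044E0F316d0s0

A controller that hides part of the disk for its own metadata, or remaps 
it as a RAID0/JBOD volume, should show up as a size mismatch or a changed 
partition table compared to the old server.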


On Mar 13, 2012, at 6:10, P-O Yliniemi  wrote:


Hello,

I'm currently replacing a temporary storage server (server1) with the one that 
should be the final one (server2). To keep the data storage from the old one 
I'm attempting to import it on the new server. Both servers are running 
OpenIndiana server build 151a.

Server 1 (old)
The zpool consists of three disks in a raidz1 configuration:
# zpool status
    c4d0    ONLINE       0     0     0
    c4d1    ONLINE       0     0     0
    c5d0    ONLINE       0     0     0

errors: No known data errors

Output of format command gives:
# format
AVAILABLE DISK SELECTIONS:
   0. c2t1d0
  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0



Server 2 (new)
I have attached the disks on the new server in the same order (which shouldn't 
matter as ZFS should locate the disks anyway)
zpool import gives:

root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

storage                    UNAVAIL  insufficient replicas
  raidz1-0                 UNAVAIL  corrupted data
    c7t5000C50044E0F316d0  ONLINE
    c7t5000C50044A30193d0  ONLINE
    c7t5000C50044760F6Ed0  ONLINE




Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
are the disk/SAS controllers the same on both servers?
-LT

Sent from my iPad

On Mar 13, 2012, at 6:10, P-O Yliniemi  wrote:

> Hello,
> 
> I'm currently replacing a temporary storage server (server1) with the one 
> that should be the final one (server2). To keep the data storage from the old 
> one I'm attempting to import it on the new server. Both servers are running 
> OpenIndiana server build 151a.
> 
> Server 1 (old)
> The zpool consists of three disks in a raidz1 configuration:
> # zpool status
>  pool: storage
> state: ONLINE
>  scan: none requested
> config:
> 
>NAME        STATE     READ WRITE CKSUM
>storage     ONLINE       0     0     0
>  raidz1-0  ONLINE       0     0     0
>    c4d0    ONLINE       0     0     0
>    c4d1    ONLINE       0     0     0
>    c5d0    ONLINE       0     0     0
> 
> errors: No known data errors
> 
> Output of format command gives:
> # format
> AVAILABLE DISK SELECTIONS:
>   0. c2t1d0 
>  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
>   1. c4d0 
>  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
>   2. c4d1 
>  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
>   3. c5d0 
>  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
>   4. c5d1 
>  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
> 
> (c5d1 was previously used as a hot spare, but I removed it in an attempt to 
> export and import the zpool without the spare)
> 
> # zpool export storage
> 
> # zpool list
> (shows only rpool)
> 
> # zpool import
>   pool: storage
> id: 17210091810759984780
>  state: ONLINE
> action: The pool can be imported using its name or numeric identifier.
> config:
> 
>storage     ONLINE
>  raidz1-0  ONLINE
>    c4d0    ONLINE
>    c4d1    ONLINE
>    c5d0    ONLINE
> 
> (a check to see that it is still importable on the old server; this has also 
> been verified, since I moved the disks back to the old server yesterday to 
> have the pool available during the night)
> 
> zdb -l output in attached files.
> 
> ---
> 
> Server 2 (new)
> I have attached the disks on the new server in the same order (which 
> shouldn't matter as ZFS should locate the disks anyway)
> zpool import gives:
> 
> root@backup:~# zpool import
>   pool: storage
> id: 17210091810759984780
>  state: UNAVAIL
> action: The pool cannot be imported due to damaged devices or data.
> config:
> 
>storage                    UNAVAIL  insufficient replicas
>  raidz1-0                 UNAVAIL  corrupted data
>    c7t5000C50044E0F316d0  ONLINE
>    c7t5000C50044A30193d0  ONLINE
>    c7t5000C50044760F6Ed0  ONLINE
> 
> The problem is that all the disks are there and online, but the pool is 
> showing up as unavailable.
> 
> Any ideas on what more I can do to solve this problem?
> 
> Regards,
>  PeO


[zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread P-O Yliniemi

Hello,

I'm currently replacing a temporary storage server (server1) with the 
one that should be the final one (server2). To keep the data storage 
from the old one I'm attempting to import it on the new server. Both 
servers are running OpenIndiana server build 151a.


Server 1 (old)
The zpool consists of three disks in a raidz1 configuration:
# zpool status
  pool: storage
 state: ONLINE
  scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    c4d0    ONLINE       0     0     0
    c4d1    ONLINE       0     0     0
    c5d0    ONLINE       0     0     0

errors: No known data errors

Output of format command gives:
# format
AVAILABLE DISK SELECTIONS:
   0. c2t1d0
  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   4. c5d1 
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0

(c5d1 was previously used as a hot spare, but I removed it in an attempt 
to export and import the zpool without the spare)


# zpool export storage

# zpool list
(shows only rpool)

# zpool import
   pool: storage
 id: 17210091810759984780
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

storage     ONLINE
  raidz1-0  ONLINE
    c4d0    ONLINE
    c4d1    ONLINE
    c5d0    ONLINE

(a check to see that it is still importable on the old server; this has 
also been verified, since I moved the disks back to the old server 
yesterday to have the pool available during the night)


zdb -l output in attached files.

---

Server 2 (new)
I have attached the disks on the new server in the same order (which 
shouldn't matter as ZFS should locate the disks anyway)

zpool import gives:

root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:

storage                    UNAVAIL  insufficient replicas
  raidz1-0                 UNAVAIL  corrupted data
    c7t5000C50044E0F316d0  ONLINE
    c7t5000C50044A30193d0  ONLINE
    c7t5000C50044760F6Ed0  ONLINE

The problem is that all the disks are there and online, but the pool is 
showing up as unavailable.


Any ideas on what more I can do to solve this problem?
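
For completeness, the further attempts I can think of would be along these 
lines: rebuilding the /dev links and then importing by the numeric 
identifier shown above (suggestions for anything better are very welcome):

# devfsadm -Cv
# zpool import -f 17210091810759984780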

Regards,
  PeO



# zdb -l c4d0s0

LABEL 0

    version: 28
    name: 'storage'
    state: 0
    txg: 2450439
    pool_guid: 17210091810759984780
    hostid: 13183520
    hostname: 'backup'
    top_guid: 11913540592052933027
    guid: 14478395923793210190
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11913540592052933027
        nparity: 1
        metaslab_array: 31
        metaslab_shift: 36
        ashift: 9
        asize: 9001731096576
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 14478395923793210190
            path: '/dev/dsk/c4d0s0'
            devid: 'id1,cmdk@AST3000DM001-9YN166=W1F07HW4/a'
            phys_path: '/pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0:a'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 9273576080530492359
            path: '/dev/dsk/c4d1s0'
            devid: 'id1,cmdk@AST3000DM001-9YN166=W1F05H2Y/a'
            phys_path: '/pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0:a'
            whole_disk: 1
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 6205751126661365015
            path: '/dev/dsk/c5d0s0'
            devid: 'id1,cmdk@AST3000DM001-9YN166=W1F032RJ/a'
            phys_path: '/pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0:a'
            whole_disk: 1
            create_txg: 4

LABEL 1

    version: 28
    name: 'storage'
    state: 0
    txg: 2450439
    pool_guid: 17210091810759984780
    hostid: 13183520
    hostname: 'backup'
    top_guid: 11913540592052933027
    guid: 14478395923793210190
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11913540592052933027
        nparity: 1
        metaslab_array: 31
        metaslab_shift: 36
        ashift: 9
        asize: 9001731096576
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 14478395923793210190
            path: '/dev/dsk/c4d0s0'
            devid: 'id1,