I have a ZFS node with a mirrored pool in which each mirror vdev pairs one
disk from each of two Solaris iSCSI storage/target nodes. One of the disks
in one of the target boxes has failed, so the pool is running in a
degraded state:
[EMAIL PROTECTED]:~]# zpool status -x
  pool: eq1-pool0
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        eq1-pool0                                  DEGRADED     0     0     0
          mirror                                   ONLINE       0     0     0
            c2t01000015170BD1D600002A00488584CDd0  ONLINE       0     0     0
            c2t01000015170D84A200002A004885878Bd0  ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c2t01000015170BD1D600002A00488584CFd0  ONLINE       0     0     0
            c2t01000015170D84A200002A004885878Cd0  ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c2t01000015170BD1D600002A00488584D0d0  ONLINE       0     0     0
            c2t01000015170D84A200002A004885878Dd0  ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c2t01000015170BD1D600002A00488584D1d0  ONLINE       0     0     0
            c2t01000015170D84A200002A004885878Fd0  ONLINE       0     0     0
          mirror                                   DEGRADED     0     0     0
            c2t01000015170BD1D600002A00488584D2d0  ONLINE       0     0     0
            c2t01000015170D84A200002A0048858790d0  FAULTED      4    70     0  too many errors
          mirror                                   ONLINE       0     0     0
            c2t01000015170BD1D600002A00488584D4d0  ONLINE       0     0     0
            c2t01000015170D84A200002A0048858791d0  ONLINE       0     0     0
errors: No known data errors
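(For context on the layout: each top-level mirror pairs one LUN from eq1-stor1 with the
corresponding LUN from eq1-stor2, so the pool was originally put together with something
roughly like the following - reconstructed from memory, device names abbreviated after
the first vdev:)

zpool create eq1-pool0 \
    mirror c2t01000015170BD1D600002A00488584CDd0 c2t01000015170D84A200002A004885878Bd0 \
    mirror <stor1-lun> <stor2-lun> \
    ...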
c2t01000015170D84A200002A0048858790d0 corresponds to this iSCSI target on eq1-stor2:
[EMAIL PROTECTED]:~]# iscsitadm list target -v eq1-stor2-pool0-disk5
Target: eq1-stor2-pool0-disk5
    iSCSI Name: iqn.1986-03.com.sun:02:03b92868-c107-cdd6-a368-ce1a30d96e9d.eq1-stor2-pool0-disk5
    Connections: 4
    ...
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 01000015170d84a200002a0048858790
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 417G
            Backing store: /dev/rdsk/c1t4d0s7
            Status: online
On eq1-stor2, cfgadm -l shows the backing disk in a failed state:
[EMAIL PROTECTED]:~]# cfgadm -l
Ap_Id                          Type         Receptacle   Occupant     Condition
sata0/0::dsk/c1t0d0            disk         connected    configured   ok
sata0/1::dsk/c1t1d0            disk         connected    configured   ok
sata0/2::dsk/c1t2d0            disk         connected    configured   ok
sata0/3::dsk/c1t3d0            disk         connected    configured   ok
sata0/4::dsk/c1t4d0            disk         connected    configured   failed
sata0/5::dsk/c1t5d0            disk         connected    configured   ok
Of course, what I want to do now is replace the bad disk while keeping
the zpool online and all the other mirrors working. Do I need to delete the
iSCSI target on eq1-stor2 first, or can I just do the following:
on eq1-stor2:
cfgadm -c unconfigure sata0/4::dsk/c1t4d0
cfgadm -c disconnect sata0/4::dsk/c1t4d0
<replace failed drive with new drive>
cfgadm -c configure sata0/4::dsk/c1t4d0
<format and label the new disk with the same partition layout as the failed
disk - see the prtvtoc/fmthard sketch below>
on eq1-zfs1:
zpool clear eq1-pool0
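To spell out the two placeholder steps a bit more (the slice-copy source disk and the
exact ZFS commands are my best guesses, so correct me if there's a better way):

on eq1-stor2, once the new disk is configured (assuming c1t5d0 has the same geometry
and slice layout the replacement should get):
prtvtoc /dev/rdsk/c1t5d0s2 | fmthard -s - /dev/rdsk/c1t4d0s2

on eq1-zfs1, assuming the iSCSI device name comes back unchanged:
zpool clear eq1-pool0 c2t01000015170D84A200002A0048858790d0
zpool replace eq1-pool0 c2t01000015170D84A200002A0048858790d0    <only if the clear alone doesn't kick off a resilver>
zpool status -x    <watch the resilver>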
Will iscsitgt on eq1-stor2 go nuts if I disconnect the SATA port and
swap disks without deleting the iSCSI target first - and will it just
resume serving the LUN once the new disk is in place? If I delete the iSCSI
target instead, will the iSCSI initiator on the other node just log out of/drop
that target without a problem? Anything else I might be missing?
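For what it's worth, I plan to keep an eye on both ends while swapping, roughly:

on eq1-zfs1 (initiator side):
iscsiadm list target -v    <watch the connection state for that IQN>
zpool status eq1-pool0     <watch for the resilver to start and complete>

on eq1-stor2 (target side):
iscsitadm list target -v eq1-stor2-pool0-disk5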
Hints appreciated.
--
Dave