I have a RAID1 volume (one of two on this PC) with two disks.

# disklabel sd5
# /dev/rsd5c:
type: SCSI
disk: SCSI disk
label: SR RAID 1
duid: 7a03a84165b3d165
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 243201
total sectors: 3907028640
boundstart: 0
boundend: 3907028640
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:       3907028608                0  4.2BSD   8192 65536 52270 # /home/vmail
  c:       3907028640                0  unused


Recently I got an error in dmesg:

mail# dmesg | grep retry
sd5: retrying read on block 767483392

(This happened during a copy operation.)
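
(Block 767483392 at 512 bytes per sector is roughly 393 GB into the volume, so it sits well inside the ~2 TB a partition.)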

and the system marked the volume as degraded:

mail# bioctl sd5
Volume      Status               Size Device
softraid0 1 Degraded    2000398663680 sd5     RAID1
          0 Online      2000398663680 1:0.0   noencl <sd2a>
          1 Offline     2000398663680 1:1.0   noencl <sd3a>

I tried to re-read this sector (and a few sectors around it) with dd, on both
the underlying disk and the volume, to make sure the sector is unreadable:

mail# dd if=/dev/rsd3c of=/dev/null bs=512 count=16 skip=767483384
16+0 records in
16+0 records out
8192 bytes transferred in 0.025 secs (316536 bytes/sec)
mail# dd if=/dev/rsd5c of=/dev/null bs=512 count=16 skip=767483384
16+0 records in
16+0 records out
8192 bytes transferred in 0.050 secs (161303 bytes/sec)

but the error did not appear.
Are there any methods to check whether the sector is bad (preferably on the fly)?
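
The only other idea I have is to re-read that area one sector at a time on the
raw suspect disk, so a single bad sector would show up as an I/O error at an
exact offset instead of possibly being masked inside a larger read. A rough
sketch over the same 16-sector window I already tried (with the caveat that the
block number in the message is presumably an offset into the sd5 volume, so on
rsd3c the matching data sits at a different offset because of the sd3a partition
offset and the softraid metadata at the start of the chunk; widening the window
would cover that uncertainty):

s=767483384
while [ $s -le 767483399 ]; do
        # one sector per read; dd exits non-zero on an I/O error
        if ! dd if=/dev/rsd3c of=/dev/null bs=512 count=1 skip=$s >/dev/null 2>&1; then
                echo "read error at sector $s"
        fi
        s=$((s + 1))
done
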
If this is not a disk error (I'm going to replace the cables just in case),
should I just bring the disk back online with
bioctl -R /dev/sd3a sd5
?
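
If the answer is yes, I assume I can just run that and then keep checking the
volume while it resyncs with

mail# bioctl sd5

until the Status column goes back to Online.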
