I'm writing a script to manage my RAID. This is what the script does to
recover the array:

umount /dev/gvinum/r5

# Remove the RAID configuration (all four drives and the volume).
gvinum rm -r d0
gvinum rm -r d1
gvinum rm -r d2
gvinum rm -r d3

# Remove the failed subdisk ($1 is the disk number) and relabel the new disk.
gvinum rm r5.p0.s$1
bsdlabel -w /dev/da$1

# Recreate the volume from the config file.
gvinum create myraid.conf

# Mark the replaced subdisk stale so it gets rebuilt, then start the volume.
gvinum setstate -f stale r5.p0.s$1
gvinum start r5
fsck -t ufs /dev/gvinum/r5

The contents of myraid.conf:

drive d0 device /dev/da0
drive d1 device /dev/da1
drive d2 device /dev/da2
drive d3 device /dev/da3
volume r5
  plex org raid5 512k
    sd drive d0
    sd drive d1
    sd drive d2
    sd drive d3
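Since the script takes the failed-disk number as $1 and then runs destructive gvinum commands, it may be worth validating the argument first. A minimal sketch, assuming the four-drive layout above (the validate_disk function name and the 0-3 range are my own, not part of the original script):

```shell
# Sketch: check the disk-number argument before the destructive
# gvinum steps. The 0-3 range matches the four drives (da0-da3)
# in myraid.conf; adjust it if the array changes.
validate_disk() {
    case "$1" in
        0|1|2|3) return 0 ;;  # valid subdisk index
        *)       return 1 ;;  # reject anything else
    esac
}

# In the recovery script, guard the destructive steps with it:
#   validate_disk "$1" || { echo "usage: $0 <disk 0-3>" >&2; exit 1; }
#   gvinum rm r5.p0.s$1
#   ...
```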

Is this the correct way to recover the array?

With hot swap, do I have to do the same for both SCSI and SATA, or will
gvinum auto-detect the new disk?

I notice that when I move the hard disks around, gvinum stops working.
I'm planning to reserve the last sector of each hard disk to record its
position in the array, so that if the disks get moved, myraid.conf can
be regenerated. I would use bsdlabel and dd to record the information.
For example:

# /dev/da0:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:   208879       16    unused        0     0
  b:        1   208895    unused        0     0
  c:   208896        0    unused        0     0         # "raw" part, don't edit
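The recording step could be sketched with dd like this. Here DISK is a regular file standing in for the one-sector "b" partition in the label above (e.g. /dev/da0b on the real system), and the marker format ("myraid drive=N") is an assumption of mine, not any gvinum convention:

```shell
# Simulate the reserved one-sector "b" partition with a file.
DISK=/tmp/da0b.img
SECTOR=512
dd if=/dev/zero of="$DISK" bs=$SECTOR count=1 2>/dev/null

# Write this drive's position in the array into the sector.
# conv=sync pads the short input block to a full sector, which
# matters when writing to a raw device.
printf 'myraid drive=0' | dd of="$DISK" bs=$SECTOR count=1 conv=sync 2>/dev/null

# Read the marker back when rebuilding myraid.conf after disks move;
# tr strips the zero padding.
dd if="$DISK" bs=$SECTOR count=1 2>/dev/null | tr -d '\0'
# prints: myraid drive=0
```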

freebsd-questions@freebsd.org mailing list